“Tonal stabilization of video” by Farbman and Lischinski
Abstract:
This paper presents a method for reducing undesirable tonal fluctuations in video: minute changes in tonal characteristics, such as exposure, color temperature, brightness and contrast in a sequence of frames, which are easily noticeable when the sequence is viewed. These fluctuations are typically caused by the camera’s automatic adjustment of its tonal settings while shooting. Our approach operates on a continuous video shot by first designating one or more frames as anchors. We then tonally align a sequence of frames with each anchor: for each frame, we compute an adjustment map that indicates how each of its pixels should be modified in order to appear as if it was captured with the tonal settings of the anchor. The adjustment map is efficiently updated between successive frames by taking advantage of temporal video coherence and the global nature of the tonal fluctuations. Once a sequence has been aligned, it is possible to generate smooth tonal transitions between anchors, and also further control its tonal characteristics in a consistent and principled manner, which is difficult to do without incurring strong artifacts when operating on unstable sequences. We demonstrate the utility of our method using a number of clips captured with a variety of video cameras, and believe that it is well-suited for integration into today’s non-linear video editing tools.
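The pipeline sketched in the abstract (pick an anchor frame, then tonally align the rest of the shot to it by chaining per-frame adjustments) can be illustrated with a small, hedged example. The code below is not the authors' implementation: it assumes NumPy, treats frames as float RGB arrays in [0, 1], and substitutes simple per-channel histogram matching for the paper's per-pixel adjustment maps; matching each frame against its already-aligned neighbor only loosely mirrors the idea of propagating the alignment along the shot via temporal coherence.

```python
# Minimal sketch of anchor-based tonal alignment (illustration only).
# Assumptions: frames are float32 RGB arrays in [0, 1]; per-channel
# histogram matching stands in for the paper's per-pixel adjustment maps.
import numpy as np

def match_channel(src, ref, bins=256):
    """Return a lookup table that maps src intensities onto ref's distribution."""
    src_hist, edges = np.histogram(src, bins=bins, range=(0.0, 1.0))
    ref_hist, _ = np.histogram(ref, bins=bins, range=(0.0, 1.0))
    src_cdf = np.cumsum(src_hist) / max(src.size, 1)
    ref_cdf = np.cumsum(ref_hist) / max(ref.size, 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # For each source CDF value, find the reference intensity with the same CDF.
    return np.interp(src_cdf, ref_cdf, centers)

def apply_lut(channel, lut, bins=256):
    """Apply a per-channel lookup table to a [0, 1] float channel."""
    idx = np.clip((channel * (bins - 1)).astype(np.int64), 0, bins - 1)
    return lut[idx]

def tonally_align(frames, anchor_index=0):
    """Align every frame of a shot to the tonal appearance of the anchor frame."""
    anchor = frames[anchor_index]
    aligned = [None] * len(frames)
    aligned[anchor_index] = anchor.copy()
    # Walk outward from the anchor; each frame is matched against its already
    # aligned neighbor, exploiting the temporal coherence of adjacent frames.
    for direction in (1, -1):
        prev = anchor
        stop = len(frames) if direction > 0 else -1
        for i in range(anchor_index + direction, stop, direction):
            out = np.empty_like(frames[i])
            for c in range(frames[i].shape[-1]):
                lut = match_channel(frames[i][..., c], prev[..., c])
                out[..., c] = apply_lut(frames[i][..., c], lut)
            aligned[i] = out
            prev = out
    return aligned

if __name__ == "__main__":
    # Synthetic example: a static scene whose exposure drifts over time.
    rng = np.random.default_rng(0)
    base = rng.random((48, 64, 3)).astype(np.float32) * 0.6
    shot = [np.clip(base * (1.0 + 0.05 * t), 0.0, 1.0) for t in range(10)]
    stabilized = tonally_align(shot, anchor_index=0)
    print("mean drift before:", float(np.mean(shot[-1]) - np.mean(shot[0])))
    print("mean drift after: ", float(np.mean(stabilized[-1]) - np.mean(stabilized[0])))
```

On the synthetic shot above, the mean intensity of the last frame drifts noticeably from the anchor before alignment and stays close to it afterwards; a curve- or map-based adjustment, as in the paper, would additionally preserve local, per-pixel behavior that global histogram matching cannot.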