“Real-time video abstraction” by Winnemöller, Olsen and Gooch

  • © Holger Winnemöller, Sven C. Olsen, and Bruce Gooch

Conference:

    SIGGRAPH 2006

Type:


Title:

    Real-time video abstraction

Presenter(s)/Author(s):

    Holger Winnemöller, Sven C. Olsen, and Bruce Gooch

Abstract:


    We present an automatic, real-time video and image abstraction framework that abstracts imagery by modifying the contrast of visually important features, namely luminance and color opponency. We reduce contrast in low-contrast regions using an approximation to anisotropic diffusion, and artificially increase contrast in higher contrast regions with difference-of-Gaussian edges. The abstraction step is extensible and allows for artistic or data-driven control. Abstracted images can optionally be stylized using soft color quantization to create cartoon-like effects with good temporal coherence. Our framework design is highly parallel, allowing for a GPU-based, real-time implementation. We evaluate the effectiveness of our abstraction framework with a user-study and find that participants are faster at naming abstracted faces of known persons compared to photographs. Participants are also better at remembering abstracted images of arbitrary scenes in a memory task.
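
Implementation sketch:


    A minimal, single-frame sketch of the pipeline the abstract describes, written against OpenCV and NumPy. It is not the authors' GPU implementation: cv2.bilateralFilter stands in for the paper's iterated approximation to anisotropic diffusion, and every name and parameter value below (abstract_frame, n_smooth, sigma_space, sigma_color, dog_sigma, dog_k, dog_tau, dog_phi, n_bins, quant_phi) is an illustrative assumption rather than a published setting.

        # Assumed single-frame sketch of the abstraction pipeline
        # (smooth -> DoG edges -> soft luminance quantization).
        import cv2
        import numpy as np

        def abstract_frame(bgr,
                           n_smooth=4,          # bilateral smoothing passes
                           sigma_space=3.0,     # spatial kernel width (pixels)
                           sigma_color=4.25,    # range kernel width (Lab units)
                           dog_sigma=1.0,       # inner Gaussian of the DoG
                           dog_k=1.6,           # outer/inner sigma ratio
                           dog_tau=0.98,        # DoG sensitivity
                           dog_phi=2.0,         # sharpness of the edge step
                           n_bins=8,            # luminance quantization levels
                           quant_phi=3.0):      # softness of the quantization step
            """Abstract one BGR uint8 frame; returns a BGR uint8 frame."""
            # Work on luminance and color-opponent channels (CIELab).
            lab = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2Lab)

            # 1. Reduce contrast in low-contrast regions: repeated bilateral
            #    filtering as an approximation to anisotropic diffusion.
            for _ in range(n_smooth):
                lab = cv2.bilateralFilter(lab, -1, sigma_color, sigma_space)
            L = lab[..., 0]  # luminance in [0, 100]

            # 2. Increase contrast in high-contrast regions: difference-of-
            #    Gaussians edges, passed through a smoothed step function.
            g1 = cv2.GaussianBlur(L, (0, 0), dog_sigma)
            g2 = cv2.GaussianBlur(L, (0, 0), dog_sigma * dog_k)
            dog = g1 - dog_tau * g2
            edges = np.where(dog > 0, 1.0, 1.0 + np.tanh(dog_phi * dog))

            # 3. Optional stylization: soft luminance quantization for
            #    cartoon-like flat regions with smooth transitions.
            bin_width = 100.0 / n_bins
            nearest = bin_width * np.floor(L / bin_width + 0.5)
            L_quant = nearest + 0.5 * bin_width * np.tanh(quant_phi * (L - nearest))

            # Composite dark edges onto the quantized luminance, convert back.
            lab[..., 0] = np.clip(L_quant * edges, 0.0, 100.0)
            out = cv2.cvtColor(lab, cv2.COLOR_Lab2BGR)
            return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

    Because each stage is a purely local filter over the frame, the same operations map directly onto per-pixel GPU passes, which is what the highly parallel framework design mentioned in the abstract exploits to reach real-time rates on video.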

References:


    1. Arad, N., and Gotsman, C. 1999. Enhancement by image-dependent warping. IEEE Trans. on Image Processing 8, 9, 1063–1074.
    2. Barash, D., and Comaniciu, D. 2004. A common framework for non-linear diffusion, adaptive smoothing, bilateral filtering and mean shift. Image and Vision Computing 22, 1, 73–81.
    3. Boomgaard, R. V. D., and de Weijer, J. V. 2002. On the equivalence of local-mode finding, robust estimation and mean-shift analysis as used in early vision tasks. 16th Internat. Conf. on Pattern Recog. 3, 927–930.
    4. Canny, J. F. 1986. A computational approach to edge detection. IEEE Trans. on Pattern Analysis and Machine Intelligence 8, 769–798.
    5. Collomosse, J. P., Rowntree, D., and Hall, P. M. 2005. Stroke surfaces: Temporally coherent artistic animations from video. IEEE Trans. on Visualization and Computer Graphics 11, 5, 540–549.
    6. DeCarlo, D., and Santella, A. 2002. Stylization and abstraction of photographs. ACM Trans. Graph. 21, 3, 769–776.
    7. Elder, J. H. 1999. Are edges incomplete? Internat. Journal of Computer Vision 34, 2–3, 97–122.
    8. Fischer, J., Bartz, D., and Strasser, W. 2005. Stylized Augmented Reality for Improved Immersion. In Proc. of IEEE VR, 195–202.
    9. Gooch, B., Reinhard, E., and Gooch, A. 2004. Human facial illustrations: Creation and psychophysical evaluation. ACM Trans. Graph. 23, 1, 27–44.
    10. Hertzmann, A. 2001. Paint by relaxation. In CGI ’01: Computer Graphics Internat. 2001, 47–54.
    11. Itti, L., and Koch, C. 2001. Computational modeling of visual attention. Nature Reviews Neuroscience 2, 3, 194–203.
    12. Loviscach, J. 1999. Scharfzeichner: Klare Bilddetails durch Verformung [Sharpener: crisp image details through warping]. Computer Technik 22, 236ff.
    13. Marr, D., and Hildreth, E. C. 1980. Theory of edge detection. Proc. Royal Soc. London, Bio. Sci. 207, 187–217.
    14. Palmer, S. E. 1999. Vision Science: Photons to Phenomenology. The MIT Press.
    15. Perona, P., and Malik, J. 1990. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. on Pattern Analysis and Machine Intelligence 12, 7, 629–639.
    16. Pham, T. Q., and Vliet, L. J. V. 2005. Separable bilateral filtering for fast video preprocessing. In IEEE Internat. Conf. on Multimedia & Expo, CD1-4.
    17. Privitera, C. M., and Stark, L. W. 2000. Algorithms for defining visual regions-of-interest: Comparison with eye fixations. IEEE Trans. on Pattern Analysis and Machine Intelligence 22, 9, 970–982.
    18. Raskar, R., Tan, K.-H., Feris, R., Yu, J., and Turk, M. 2004. Non-photorealistic camera: depth edge detection and stylized rendering using multi-flash imaging. ACM Trans. Graph. 23, 3, 679–688.
    19. Saito, T., and Takahashi, T. 1990. Comprehensible rendering of 3-D shapes. In Proc. of ACM SIGGRAPH 90, 197–206.
    20. Santella, A., and DeCarlo, D. 2004. Visual interest and NPR: an evaluation and manifesto. In Proc. of NPAR ’04, 71–78.
    21. Stevenage, S. V. 1995. Can caricatures really produce distinctiveness effects? British Journal of Psychology 86, 127–146.
    22. Tomasi, C., and Manduchi, R. 1998. Bilateral filtering for gray and color images. In Proceedings of ICCV ’98, 839.
    23. Wang, J., Xu, Y., Shum, H.-Y., and Cohen, M. F. 2004. Video tooning. ACM Trans. Graph. 23, 3, 574–583.
    24. Winkenbach, G., and Salesin, D. H. 1994. Computer-generated pen-and-ink illustration. In Proc. of ACM SIGGRAPH 94, 91–100.
    25. Wyszecki, G., and Stiles, W. S. 1982. Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley, New York, NY.

