“Temporally Coherent Video De-Anaglyph” by Roo and Richardt

  • ©Joan Sol Roo and Christian Richardt

Entry Number: 98

Title:

    Temporally Coherent Video De-Anaglyph

Presenter(s)/Author(s):

    Joan Sol Roo and Christian Richardt

Abstract:


    For a long time, stereoscopic 3D videos were usually encoded and shown in the anaglyph format. This format combines the two stereo views into a single color image by splitting the color spectrum and assigning each view to one half of it, for example red for the left view and cyan (blue+green) for the right view. Glasses with matching color filters then separate the color channels again to provide the appropriate view to each eye. This simplicity made anaglyph stereo a popular choice for showing stereoscopic content, as it works with existing screens, projectors and print media. However, modern stereo displays and projectors natively support two full-color views, and they avoid the viewing discomfort associated with anaglyph videos.
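    The following is a minimal sketch, not taken from this work, of how such a red-cyan encoding can be composed from a full-color stereo pair. It assumes OpenCV with 8-bit BGR images; the function and file names are illustrative only.

        // Sketch: compose a red-cyan anaglyph from a full-color stereo pair.
        // Assumes OpenCV and 8-bit BGR images (channel order: blue, green, red).
        #include <opencv2/opencv.hpp>
        #include <vector>

        cv::Mat makeAnaglyph(const cv::Mat& left, const cv::Mat& right)
        {
            std::vector<cv::Mat> l, r;
            cv::split(left, l);   // l[0] = blue, l[1] = green, l[2] = red
            cv::split(right, r);

            // Red channel comes from the left view; green and blue from the right view.
            std::vector<cv::Mat> channels = { r[0], r[1], l[2] };
            cv::Mat anaglyph;
            cv::merge(channels, anaglyph);
            return anaglyph;
        }

        int main()
        {
            cv::Mat left  = cv::imread("left.png");   // illustrative file names
            cv::Mat right = cv::imread("right.png");
            cv::imwrite("anaglyph.png", makeAnaglyph(left, right));
            return 0;
        }

    De-anaglyph is the inverse problem: only the left view's red channel and the right view's green and blue channels survive in the anaglyph, so the remaining three channels must be reconstructed.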
    Our work investigates how to convert existing anaglyph videos to the full-color stereo format used by modern displays. Anaglyph videos contain only half the color information of full-color videos, so the missing color channels need to be reconstructed from the existing ones in a plausible and temporally coherent fashion. Joulin and Kang [2013] propose an approach that works well for images, but its extension to video is limited by its heavy computational cost. Other techniques only support single images and, when applied to each frame of a video, generally produce flickering results.
    In our approach, we put the temporal coherence of the stereo results front and center by expressing Joulin and Kang’s technique within the practical temporal consistency framework of Lang et al. [2012]. As a result, our approach is both efficient and temporally coherent. In addition, it computes temporally coherent optical flow and disparity maps that can be used for various post-processing tasks.
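    The sketch below illustrates the general idea of enforcing temporal coherence by filtering per-frame results along optical flow. It is a simplified illustration under assumptions of its own, not the actual algorithm of Lang et al. [2012] or of this work; OpenCV types and the blending weight alpha are assumptions.

        // Sketch: blend the per-frame result with the previous frame's result,
        // warped into the current frame along backward optical flow.
        // Assumes OpenCV; 'flow' is CV_32FC2 and maps each current pixel to
        // its position in the previous frame.
        #include <opencv2/opencv.hpp>

        cv::Mat warpToCurrent(const cv::Mat& previous, const cv::Mat& flow)
        {
            cv::Mat mapX(flow.size(), CV_32FC1), mapY(flow.size(), CV_32FC1);
            for (int y = 0; y < flow.rows; ++y)
                for (int x = 0; x < flow.cols; ++x)
                {
                    const cv::Vec2f f = flow.at<cv::Vec2f>(y, x);
                    mapX.at<float>(y, x) = x + f[0];
                    mapY.at<float>(y, x) = y + f[1];
                }
            cv::Mat warped;
            cv::remap(previous, warped, mapX, mapY, cv::INTER_LINEAR);
            return warped;
        }

        // 'alpha' trades responsiveness to new content against temporal smoothness.
        cv::Mat temporalBlend(const cv::Mat& perFrame, const cv::Mat& previousResult,
                              const cv::Mat& flow, double alpha = 0.3)
        {
            cv::Mat warped = warpToCurrent(previousResult, flow);
            cv::Mat blended;
            cv::addWeighted(perFrame, alpha, warped, 1.0 - alpha, 0.0, blended);
            return blended;
        }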


Additional Information:


    We will make our C++ implementations of SIFT flow, the domain transform and Lang et al.’s temporal consistency framework available on our project website http://tempconst.gforge.inria.fr/.

References:


    1. Gastal, E. S. L., and Oliveira, M. M. 2011. Domain transform for edge-aware image and video processing. ACM Transactions on Graphics 30, 4, 69:1–12.
    2. Joulin, A., and Kang, S. B. 2013. Recovering stereo pairs from anaglyphs. In Proceedings of CVPR, 289–296.
    3. Lang, M., Wang, O., Aydin, T., Smolic, A., and Gross, M. 2012. Practical temporal consistency for image-based graphics applications. ACM Transactions on Graphics 31, 4, 34:1–8.
    4. Liu, C., Yuen, J., and Torralba, A. 2011. SIFT flow: Dense correspondence across scenes and its applications. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, 5, 978–994.

Website:

    http://tempconst.gforge.inria.fr/

Additional Images:

©Joan Sol Roo and Christian Richardt
