GazeStereo3D: seamless disparity manipulations

Session/Category Title:   DISPLAY SOFTWARE


Abstract:


    Producing a high-quality stereoscopic impression on current displays is a challenging task. The content has to be carefully prepared to maintain visual comfort, which typically degrades the quality of depth reproduction. In this work, we show that this problem can be significantly alleviated when the eye fixation regions can be roughly estimated. We propose a new method for stereoscopic depth adjustment that utilizes eye tracking or other gaze-prediction information. The key idea that distinguishes our approach from previous work is to apply gradual depth adjustments during eye fixation, so that they remain unnoticeable. To this end, we measure the limits imposed on the speed of disparity changes in various depth adjustment scenarios and formulate a new model that can guide such seamless stereoscopic content processing. Based on this model, we propose a real-time controller that applies local manipulations to stereoscopic content to find the optimum between depth reproduction and visual comfort. We show that the controller is largely immune to the limitations of low-cost eye tracking solutions. We also demonstrate the benefits of our model in off-line applications, such as stereoscopic movie production, where skillful directors can reliably guide and predict viewers' attention, or where attended image regions are identified during eye tracking sessions. We validate both the model and the controller in a series of user experiments, which show significant improvements in depth perception without sacrificing visual quality when our techniques are applied.
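    The core controller idea described above (gradual, speed-limited disparity adjustments that stay below the viewer's detection threshold during fixation) can be sketched as a simple per-frame rate limiter. This is an illustrative reconstruction only: the function name, units, and `max_speed` threshold are assumptions, not the paper's actual measured model.

    ```python
    def step_disparity(current, target, max_speed, dt):
        """Move the disparity offset one frame toward the target.

        current, target : disparity offsets (e.g., in arcmin); hypothetical units.
        max_speed       : stand-in for the measured limit on the speed of
                          disparity change that remains unnoticeable (arcmin/s).
        dt              : frame time in seconds.
        """
        delta = target - current
        max_step = max_speed * dt          # largest change allowed this frame
        if abs(delta) <= max_step:
            return target                  # close enough: snap to the target
        # otherwise advance by the maximum seamless step, preserving direction
        return current + (max_step if delta > 0 else -max_step)
    ```

    Run once per frame, this converges to any new target disparity (e.g., after the gaze moves to a new fixation region) without ever exceeding the assumed seamlessness limit; for example, `step_disparity(0.0, 10.0, 2.0, 0.5)` advances by only 1.0 per half-second frame.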


