
“Optimizing depth perception in virtual and augmented reality through gaze-contingent stereo rendering” by Krajancich, Kellnhofer and Wetzstein

Conference:

    SIGGRAPH Asia 2020


Type(s):

    Technical Papers

Title:

    Optimizing depth perception in virtual and augmented reality through gaze-contingent stereo rendering

Session/Category Title:   VR and Real-time Techniques


Presenter(s)/Author(s):

    Brooke Krajancich, Petr Kellnhofer, Gordon Wetzstein

Abstract:


    Virtual and augmented reality (VR/AR) displays crucially rely on stereoscopic rendering to enable perceptually realistic user experiences. Yet, existing near-eye display systems ignore the gaze-dependent shift of the no-parallax point in the human eye. Here, we introduce a gaze-contingent stereo rendering technique that models this effect and conduct several user studies to validate its effectiveness. Our findings include experimental validation of the location of the no-parallax point, which we then use to demonstrate significant improvements of disparity and shape distortion in a VR setting, and consistent alignment of physical and digitally rendered objects across depths in optical see-through AR. Our work shows that gaze-contingent stereo rendering improves perceptual realism and depth perception of emerging wearable computing systems.
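
    For readers curious how a renderer might apply this idea, the short Python sketch below illustrates the concept described in the abstract: rather than keeping each eye's stereo camera at a fixed position, the viewpoint is moved to a no-parallax point offset along the current gaze direction, so the rendered viewpoint shifts as the eye rotates. This is a minimal sketch only; the function name, the 8 mm offset, the 64 mm interpupillary distance, and the coordinate conventions are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch (not the paper's implementation) of gaze-contingent stereo
# camera placement: each eye's rendering viewpoint is placed at an assumed
# no-parallax point that sits a fixed distance in front of the eye's center
# of rotation and rotates with the gaze direction. All numeric values below
# are illustrative placeholders, not the values validated in the paper.
import numpy as np

def no_parallax_camera_position(rotation_center, gaze_dir, offset_m=0.008):
    """Return the per-eye camera position at the gaze-dependent no-parallax point."""
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)          # normalize gaze direction
    return np.asarray(rotation_center, dtype=float) + offset_m * gaze_dir

# Conventional stereo rendering keeps each camera fixed near the eye's rotation
# center; a gaze-contingent renderer would update these positions every frame
# from eye-tracking data before building the per-eye view matrices.
left_center  = np.array([-0.032, 0.0, 0.0])   # half of an assumed 64 mm IPD (meters)
right_center = np.array([ 0.032, 0.0, 0.0])
gaze = np.array([0.1, 0.0, -1.0])             # example gaze toward a near, off-axis target

left_cam  = no_parallax_camera_position(left_center,  gaze)
right_cam = no_parallax_camera_position(right_center, gaze)
print("left eye camera: ", left_cam)
print("right eye camera:", right_cam)
```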



Submit a story:

If you would like to submit a story about this presentation, please contact us: historyarchives@siggraph.org