“Perceptual Requirements for World-Locked Rendering in AR and VR” by Guan, Penner, Hegland, Letham and Lanman
Session/Category Title: Technical Papers Fast-Forward
Abstract:
Stereoscopic, head-tracked display systems can show users realistic, world-locked virtual objects and environments. However, discrepancies between the rendering pipeline and physical viewing conditions can lead to perceived instability in the rendered content, resulting in reduced realism, immersion, and, potentially, visually induced motion sickness. The requirements for perceptually stable world-locked rendering are unknown because of the challenge of constructing a wide-field-of-view, distortion-free display with highly accurate head and eye tracking. In this work we build new hardware and software on top of recently introduced hardware and present a system capable of rendering virtual objects over real-world references without perceivable drift. We use this platform to study acceptable errors in render camera position for world-locked rendering in augmented- and virtual-reality scenarios, and find an order-of-magnitude difference in perceptual sensitivity between the two. We conclude by comparing the study results with an analytic model that examines changes to apparent depth and visual direction in response to camera displacement errors; this analysis identifies visual direction as an important consideration for world-locked rendering, alongside depth errors from incorrect disparity.
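The abstract's closing comparison invokes an analytic model relating render camera displacement to errors in apparent depth and visual direction. The sketch below is an illustrative reconstruction of the underlying stereo geometry only, not the paper's actual model; the function names, the symmetric-vergence assumption, and the example parameter values (a 64 mm interpupillary distance, targets at 1 m) are our own.

```python
import math

def visual_direction_error_deg(displacement_m, distance_m):
    """Angular shift in apparent direction of a point at `distance_m`
    when both render cameras are displaced laterally by `displacement_m`.
    Disparity is unchanged by a common lateral shift, so the dominant
    effect is a rotation of visual direction of roughly atan(d/Z)."""
    return math.degrees(math.atan2(displacement_m, distance_m))

def depth_from_vergence_m(ipd_m, vergence_rad):
    """Distance implied by a symmetric vergence angle: Z = IPD / (2 tan(v/2))."""
    return ipd_m / (2.0 * math.tan(vergence_rad / 2.0))

def depth_error_m(ipd_m, distance_m, separation_error_m):
    """Depth error when content is rendered with camera separation
    IPD + e but viewed with the true IPD: the vergence angle specified
    by the imagery no longer corresponds to the intended distance."""
    rendered_vergence = 2.0 * math.atan2(
        (ipd_m + separation_error_m) / 2.0, distance_m)
    return depth_from_vergence_m(ipd_m, rendered_vergence) - distance_m

# Example: a 5 mm lateral camera error at 1 m shifts visual direction
# by ~0.29 deg, while a 4 mm over-wide camera separation pulls a 1 m
# target perceptibly nearer via the disparity/vergence cue.
print(visual_direction_error_deg(0.005, 1.0))
print(depth_error_m(0.064, 1.0, 0.004))
```

This separation of the two error channels mirrors the abstract's point: a common camera displacement can leave disparity-specified depth intact while still producing a visible direction error, so depth analysis alone understates the stability requirement.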


