“Luminance-contrast-aware foveated rendering” by Okan Tarhan Tursun, Elena Arabadzhiyska-Koleva, Marek Wernikowski, Radosław Mantiuk, Hans-Peter Seidel, Karol Myszkowski, and Piotr Didyk



Session Title:

    VR and AR





    Current rendering techniques struggle to fulfill the quality and power-efficiency requirements imposed by new display devices such as virtual reality headsets. A promising solution to overcome these problems is foveated rendering, which exploits gaze information to reduce rendering quality for peripheral vision, where the requirements of the human visual system are significantly lower. Most current solutions model sensitivity as a function of eccentricity alone, neglecting the fact that it is also strongly influenced by the displayed content. In this work, we propose a new luminance-contrast-aware foveated rendering technique which demonstrates that the computational savings of foveated rendering can be significantly improved if the local luminance contrast of the image is analyzed. To this end, we first study the resolution requirements at different eccentricities as a function of luminance patterns. We then use this information to derive a low-cost predictor of the foveated rendering parameters. Its main feature is the ability to predict the parameters using only a low-resolution version of the current frame, even though the prediction holds for high-resolution rendering. This property is essential for estimating the required quality before the full-resolution image is rendered. We demonstrate that our predictor can efficiently drive the foveated rendering technique, and we analyze its benefits in a series of user experiments.
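    The abstract does not give the predictor's exact form, but the core idea — measure local luminance contrast on a low-resolution frame and combine it with eccentricity to choose a per-region rendering resolution — can be illustrated with a minimal NumPy sketch. The block-wise RMS contrast measure, the linear eccentricity falloff, and all function names and parameters below are illustrative assumptions, not the paper's calibrated model.

    ```python
    import numpy as np

    def local_rms_contrast(luma, win=8):
        """Block-wise RMS contrast: windowed std. dev. of luminance
        divided by windowed mean, on a low-resolution luminance image."""
        h, w = luma.shape
        # Pad with edge values so the image tiles evenly into win x win blocks.
        ph, pw = (-h) % win, (-w) % win
        padded = np.pad(luma, ((0, ph), (0, pw)), mode="edge")
        blocks = padded.reshape(padded.shape[0] // win, win,
                                padded.shape[1] // win, win)
        mean = blocks.mean(axis=(1, 3))
        std = blocks.std(axis=(1, 3))
        return std / np.maximum(mean, 1e-6)

    def resolution_scale(contrast, eccentricity_deg,
                         slope=0.05, contrast_floor=0.5):
        """Toy predictor: the tolerable resolution reduction grows with
        eccentricity and shrinks where local contrast is high.
        Returns a per-block resolution scale in (0, 1]."""
        # Eccentricity term: required resolution falls off away from the gaze point.
        ecc_term = 1.0 / (1.0 + slope * eccentricity_deg)
        # Content term: high-contrast regions keep more resolution.
        content_term = np.clip(contrast_floor + contrast, contrast_floor, 1.0)
        return np.clip(ecc_term * content_term, 0.05, 1.0)
    ```

    In a renderer, the contrast map would be computed once per frame from the low-resolution pass, and the resulting per-block scale would drive, e.g., a variable-rate-shading setting before the full-resolution pass is rendered.
    
    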


