“Saccade landing position prediction for gaze-contingent rendering” by Arabadzhiyska-Koleva, Tursun, Myszkowski, Seidel, and Didyk

  • © Elena Arabadzhiyska-Koleva, Okan Tarhan Tursun, Karol Myszkowski, Hans-Peter Seidel, and Piotr Didyk

Title:

    Saccade landing position prediction for gaze-contingent rendering

Session/Category Title: People Power

Abstract:


    Gaze-contingent rendering shows promise in improving perceived quality by providing a better match between image quality and the requirements of the human visual system. For example, information about fixation allows rendering quality to be reduced in peripheral vision, and the saved resources can be used to improve quality in the foveal region. Gaze-contingent rendering can also compensate for certain limitations of display devices, such as reduced dynamic range or a lack of accommodation cues. Despite this potential, and despite the recent drop in the price of eye trackers, the adoption of such solutions is hampered by system latency, which leads to a mismatch between image quality and the actual gaze location. This mismatch is especially apparent during fast saccadic movements, when the information about gaze location is significantly delayed and the quality mismatch can be noticed. To address this problem, we suggest a new way of updating images in gaze-contingent rendering during saccades. Instead of rendering according to the current gaze position, our technique predicts where the saccade is likely to end and provides an image for the new fixation location as soon as the prediction is available. While the quality mismatch during the saccade remains unnoticed due to saccadic suppression, a correct image for the new fixation is in place before the fixation is established. This paper describes the derivation of a model for predicting saccade landing positions and demonstrates how it can be used in gaze-contingent rendering to reduce the influence of system latency on perceived quality. The technique is validated in a series of experiments for various combinations of display frame rate and eye-tracker sampling rate.
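
    To make the described pipeline concrete, below is a minimal Python sketch of the prediction step. It is illustrative only: the paper derives its own landing-position model, whereas this fragment substitutes two stand-ins, a simple velocity threshold for saccade onset and an inverted main-sequence relation, A = -C ln(1 - v_peak / V_MAX), for amplitude. The constants SACCADE_ONSET_DEG_S, V_MAX, and C, and the helper detect_and_predict, are hypothetical assumptions, not values or APIs from the paper.

    import numpy as np

    # Illustrative constants -- hypothetical values, not taken from the paper.
    SACCADE_ONSET_DEG_S = 130.0   # velocity threshold for detecting saccade onset
    V_MAX = 500.0                 # assumed asymptotic main-sequence peak velocity (deg/s)
    C = 14.0                      # assumed main-sequence amplitude constant (deg)

    def detect_and_predict(gaze, t):
        """Return a predicted landing position (deg) once a saccade is
        detected in the gaze samples `gaze` (N x 2 array, deg) with
        timestamps `t` (s), or None while the eye is still fixating.

        Sketch only: onset comes from a velocity threshold, amplitude from
        inverting the main sequence A = -C * ln(1 - v_peak / V_MAX), and
        direction from the displacement since onset. The paper's model
        instead predicts the landing position from the shape of the
        saccade's position curve.
        """
        # Sample-to-sample speeds (deg/s).
        v = np.linalg.norm(np.diff(gaze, axis=0), axis=1) / np.diff(t)
        above = np.nonzero(v > SACCADE_ONSET_DEG_S)[0]
        if above.size == 0:
            return None                              # still fixating
        onset = above[0]
        v_peak = min(v[onset:].max(), 0.99 * V_MAX)  # clamp to keep the log finite
        amplitude = -C * np.log(1.0 - v_peak / V_MAX)
        direction = gaze[-1] - gaze[onset]
        direction = direction / np.linalg.norm(direction)
        return gaze[onset] + amplitude * direction

    In a gaze-contingent renderer, the returned position would replace the raw tracker sample as the foveation center as soon as it becomes available, so the high-quality region is already in place when the saccade lands; incoming samples can refine the prediction on every frame until the saccade ends.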


