“Image features influence reaction time: a learned probabilistic perceptual model for saccade latency” by Duinkharjav, Chakravarthula, Brown, Patney, and Sun

  • © Budmonde Duinkharjav, Praneeth Chakravarthula, Rachel Brown, Anjul Patney, and Qi Sun

Conference:

    SIGGRAPH 2022

Type:

    Technical Paper

Title:

    Image features influence reaction time: a learned probabilistic perceptual model for saccade latency

Presenter(s)/Author(s):

    Budmonde Duinkharjav, Praneeth Chakravarthula, Rachel Brown, Anjul Patney, and Qi Sun

Abstract:


    We ask, and answer, an essential question: “how quickly do we react after observing a displayed visual target?” To this end, we present psychophysical studies that characterize the remarkable disconnect between human saccadic behavior and spatial visual acuity. Building on the results of these studies, we develop a perceptual model that predicts temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Specifically, we implement a neurologically inspired probabilistic model that mimics the accumulation of confidence leading to a perceptual decision. We validate our model with a series of objective measurements and user studies using an eye-tracked VR display. The results demonstrate that our model's predictions are in statistical alignment with real-world human behavior. Further, we establish that many sub-threshold image modifications commonly introduced in graphics pipelines may significantly alter human reaction timing, even when the differences are visually undetectable. Finally, we show that our model can serve as a metric for predicting and altering the reaction latency of users in interactive computer graphics applications, and may thus improve gaze-contingent rendering, the design of virtual experiences, and player performance in e-sports. We illustrate this with two examples: estimating competition fairness in a video game with two different team colors, and tuning display viewing distance to minimize player reaction time.
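
    The sketch below illustrates, in Python, the kind of accumulation-to-threshold model the abstract describes: noisy evidence about the target builds up until it crosses a confidence threshold, and the crossing time is the predicted saccade latency. It is a minimal illustration of the general drift-diffusion idea, not the authors' implementation; the function simulate_latency and the parameters rate, noise, theta, and t_motor are hypothetical stand-ins (in the paper, the latency distribution's parameters are learned from image statistics rather than simulated).

        import numpy as np

        def simulate_latency(rate, noise, theta, dt=1e-3, t_motor=0.05, rng=None):
            """Draw one saccade latency (seconds) from an accumulation-to-threshold model."""
            rng = np.random.default_rng() if rng is None else rng
            evidence, t = 0.0, 0.0
            while evidence < theta:
                # Euler-Maruyama step of a Wiener process with drift; stronger
                # stimuli (e.g. higher contrast) map to a larger drift `rate`.
                evidence += rate * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return t_motor + t  # constant sensorimotor delay

        # First-passage times of this process follow an inverse Gaussian (Wald)
        # distribution with mean theta/rate, so stronger stimuli predict shorter
        # and less variable latencies.
        rng = np.random.default_rng(0)
        samples = [simulate_latency(rate=2.0, noise=1.0, theta=1.0, rng=rng) for _ in range(2000)]
        print(f"mean latency ~{np.mean(samples) * 1e3:.0f} ms (analytic: 550 ms)")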
