“A perceptual model of motion quality for rendering with adaptive refresh-rate and resolution” by Denes, Jindal, Mikhailiuk and Mantiuk

  • © Gyorgy Denes, Akshay Jindal, Aliaksei Mikhailiuk, and Rafal K. Mantiuk





Session/Category Title: Monte Carlo and Perception



    Limited GPU performance budgets and transmission bandwidths mean that real-time rendering often has to compromise on spatial resolution or temporal resolution (refresh rate). A common practice is to keep one of the two constant and dynamically control the other. However, this strategy is suboptimal when the velocity of the displayed content varies. To find the best trade-off between spatial resolution and refresh rate, we propose a perceptual visual model that predicts the quality of motion given an object's velocity and the predictability of its motion. The model considers two motion artifacts to establish an overall quality score: non-smooth (juddery) motion and blur. Blur is modeled as the combined effect of eye motion, finite refresh rate, and display resolution. To fit the free parameters of the proposed visual model, we measured eye movement for predictable and unpredictable motion, and conducted psychophysical experiments to measure the quality of motion from 50 Hz to 165 Hz. We demonstrate the utility of the model with our on-the-fly motion-adaptive rendering algorithm, which adjusts the refresh rate of a G-Sync-capable monitor based on a given rendering budget and the observed object motion. Our psychophysical validation experiments demonstrate that the proposed algorithm performs better than constant-refresh-rate solutions, showing that motion-adaptive rendering is an attractive technique for driving variable-refresh-rate displays.
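The selection problem the abstract describes — choosing a (refresh rate, resolution) operating point that maximizes predicted motion quality under a fixed rendering budget — can be sketched as a small search. This is not the paper's fitted model; `quality` below is a hypothetical stand-in that merely captures the qualitative behavior the abstract states (judder and blur both worsen with velocity and improve with refresh rate; blur also worsens at low resolution), and the cost model is an assumed proxy for pixels rendered per second.

```python
def quality(velocity_deg_s: float, refresh_hz: float, resolution_frac: float) -> float:
    """Toy perceptual score (higher is better). Stand-in for the paper's
    fitted visual model: penalizes judder (non-smooth motion at low refresh
    rates relative to velocity) and blur (hold-type/eye-motion blur plus
    loss from reduced resolution)."""
    judder = velocity_deg_s / refresh_hz                           # non-smooth motion artifact
    blur = velocity_deg_s / refresh_hz + (1.0 - resolution_frac)   # eye-motion blur + resampling blur
    return -(judder + blur)

def pick_setting(velocity_deg_s: float, budget: float,
                 refresh_rates=(50, 60, 90, 120, 165),
                 resolutions=(0.25, 0.5, 0.75, 1.0)):
    """Return the (refresh_hz, resolution_frac) pair with the best predicted
    quality whose rendering cost fits the budget. Cost is an assumed proxy:
    frames per second times relative pixel count (resolution_frac squared)."""
    best, best_q = None, float("-inf")
    for hz in refresh_rates:
        for res in resolutions:
            cost = hz * res * res
            if cost <= budget:
                q = quality(velocity_deg_s, hz, res)
                if q > best_q:
                    best, best_q = (hz, res), q
    return best
```

With this toy model, a static scene favors full resolution at a low refresh rate, while fast motion shifts the optimum toward a high refresh rate at reduced resolution — the velocity-dependent trade-off that motivates the paper's adaptive algorithm.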


