“Exploration of Using Face Tracking to Reduce GPU Rendering on Current and Future Auto-Stereoscopic Displays” by Pan, Zheng and Campbell

  • ©Xingyu Pan, Mengya Zheng, and Abraham Campbell

Entry Number: 39

Title:

    Exploration of Using Face Tracking to Reduce GPU Rendering on Current and Future Auto-Stereoscopic Displays

Presenter(s)/Author(s):

    Xingyu Pan, Mengya Zheng, and Abraham Campbell

Abstract:


    Future auto-stereoscopic displays offer the exciting possibility of virtual reality without head-mounted displays. Fundamentally, however, views only need to be generated for known observers, so the classical approach of rendering all views at once wastes GPU resources and limits the scale of an auto-stereoscopic display. We present a technique that reduces GPU consumption on an auto-stereoscopic display by giving the display a context awareness of its observers. The technique was first applied to the Looking Glass device on the Unity3D platform. Rather than rendering 45 different views at the same time, the framework renders, for each observer, only the six views visible to that observer's two eyes, based on the tracked eye positions. Given the current specifications of this device, the framework saves 73% of GPU consumption for the Looking Glass if it were to render an 8K × 8K resolution scene, and the savings grow as the resolution increases. This technique can be applied to reduce GPU requirements for future auto-stereoscopic displays.
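The core idea of the abstract — render only the handful of views that tracked eyes can actually see, instead of all 45 — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the view-cone angle, the mapping from eye angle to view index, and the three-views-per-eye blend are all assumptions chosen so that one observer needs six of the 45 views.

```python
# Hypothetical sketch of observer-aware view selection for a 45-view
# auto-stereoscopic display. All constants and function names here are
# illustrative assumptions, not the paper's API.

NUM_VIEWS = 45          # total views the display interlaces (Looking Glass)
VIEW_CONE_DEG = 40.0    # assumed horizontal view cone of the display
VIEWS_PER_EYE = 3       # assumed views blended per eye -> 6 per observer

def views_for_eye(eye_angle_deg):
    """Map one tracked eye's horizontal angle to its nearest view indices."""
    # Normalize the angle across the view cone into [0, 1].
    t = (eye_angle_deg + VIEW_CONE_DEG / 2) / VIEW_CONE_DEG
    center = round(t * (NUM_VIEWS - 1))
    half = VIEWS_PER_EYE // 2
    # Clamp to valid view indices at the edges of the cone.
    return {min(max(i, 0), NUM_VIEWS - 1)
            for i in range(center - half, center + half + 1)}

def views_to_render(observers):
    """Union of view indices needed for all tracked (left, right) eye angles."""
    needed = set()
    for left_deg, right_deg in observers:
        needed |= views_for_eye(left_deg)
        needed |= views_for_eye(right_deg)
    return sorted(needed)

# One roughly centered observer: only ~6 of the 45 views need rendering,
# so the remaining views can be skipped entirely by the renderer.
print(views_to_render([(-3.0, 3.0)]))
```

A renderer would then draw only the returned view indices into the display's quilt texture and leave the rest untouched, which is where the GPU savings come from; with more tracked observers the union simply grows by up to six views each.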

References:


    • Anjul Patney, Marco Salvi, Joohwan Kim, Anton Kaplanyan, Chris Wyman, Nir Benty, David Luebke, and Aaron Lefohn. 2016. Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics (TOG) 35, 6 (2016), 179.
