“IRIDiuM: Immersive Rendered Interactive Deep Media” by Koniaris, Huerta, Kosek, Darragh, Malleson, et al.

  • © Charalampos (Babis) Koniaris, Ivan Huerta, Maggie Kosek, Karen Darragh, Charles Malleson, Joanna Jamrozy, Nick Swafford, Jose A. Iglesias-Guitian, Bochang Moon, Ali Israr, and Kenny Mitchell


Description:


    Compelling virtual reality experiences require high-quality imagery as well as head motion with six degrees of freedom. Most existing systems either limit the motion of the viewer (prerecorded, fixed-position 360° video panoramas) or are limited in realism, e.g., video-game-quality graphics rendered in real time on low-powered devices. We propose a solution for presenting movie-quality graphics to the user while still allowing the sense of presence afforded by free-viewpoint head motion. By transforming offline-rendered movie content into a novel immersive deep media representation, we display the content in real time according to the tracked head pose. For each frame, we generate a set of 360-degree images (color and depth) using cameras placed at selected locations within a small view volume surrounding a central viewing position. We employ a parallax masking technique that minimizes the rendering work required for the surfaces that become additionally visible from viewing locations around the main viewpoint. At run time, a decompression and rendering algorithm fetches the appropriate surface data and projects it to the eye positions as the user moves within the tracked view volume.
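
    The following minimal sketch illustrates the core reprojection idea behind such a representation: lifting a texel of a 360-degree color-and-depth image to a world-space point using its depth, then re-projecting it toward the tracked eye position inside the view volume. It is not the authors' implementation; the equirectangular layout, function names, and parameters are assumptions made for the example.

        # Minimal reprojection sketch (illustrative only, not the IRIDiuM renderer).
        # Assumes an equirectangular 360-degree color+depth panorama captured at a
        # reference camera position inside the tracked view volume.
        import numpy as np

        def texel_to_direction(u, v):
            """Map normalized equirectangular coordinates (u, v in [0, 1]) to a unit ray direction."""
            theta = (u - 0.5) * 2.0 * np.pi   # longitude
            phi = (0.5 - v) * np.pi           # latitude
            return np.array([np.cos(phi) * np.sin(theta),
                             np.sin(phi),
                             np.cos(phi) * np.cos(theta)])

        def direction_to_texel(d):
            """Inverse mapping: unit direction back to equirectangular (u, v)."""
            d = d / np.linalg.norm(d)
            theta = np.arctan2(d[0], d[2])
            phi = np.arcsin(np.clip(d[1], -1.0, 1.0))
            return theta / (2.0 * np.pi) + 0.5, 0.5 - phi / np.pi

        def reproject_texel(u, v, depth, cam_pos, eye_pos):
            """Lift a depth texel from the reference camera to world space, then
            re-project it into a panorama parameterization centered on the new eye."""
            world = cam_pos + depth * texel_to_direction(u, v)
            offset = world - eye_pos
            new_u, new_v = direction_to_texel(offset)
            return new_u, new_v, np.linalg.norm(offset)

        # Example: a surface point 3 m away, with the eye shifted 10 cm to the right.
        print(reproject_texel(0.25, 0.5, 3.0,
                              cam_pos=np.array([0.0, 0.0, 0.0]),
                              eye_pos=np.array([0.1, 0.0, 0.0])))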

    To further illustrate this capacity for interactivity and embodiment within VR movies, we track the full upper body using our sparse-sensor motion-capture solver, allowing users to see themselves in the virtual world. Here, both the head and upper body are tracked in real time using data from IMU (Inertial Measurement Unit) and EMG (Electromyogram) sensors. Our real-time solver, Triduna Live, uses a physics-based approach to robustly estimate pose from a few sensors. Hand-gesture and object-grasping motions are detected from the EMG data and combined with the tracked body pose to control gameplay seamlessly integrated within the deep media environment.
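
    As a separate illustration of the gesture-detection step, the sketch below flags a grasp when a smoothed, rectified EMG envelope crosses a threshold. It is a generic stand-in rather than the actual solver or EMG pipeline; the channel count, window size, and threshold are assumed values chosen for the example.

        # Illustrative EMG grasp-detection sketch (not the Triduna Live pipeline).
        import numpy as np

        def emg_envelope(samples, window=64):
            """Rectify each channel and smooth it with a moving average to obtain an activation envelope."""
            rectified = np.abs(samples)          # samples: (n_samples, n_channels)
            kernel = np.ones(window) / window
            return np.stack([np.convolve(rectified[:, c], kernel, mode="same")
                             for c in range(rectified.shape[1])], axis=1)

        def detect_grasp(samples, threshold=0.3):
            """Report a grasp wherever the mean envelope across channels exceeds the threshold."""
            return emg_envelope(samples).mean(axis=1) > threshold   # boolean per sample

        # Example: synthetic 8-channel EMG with a simulated muscle burst in the middle.
        rng = np.random.default_rng(0)
        emg = 0.05 * rng.standard_normal((1000, 8))
        emg[400:600] += 0.8 * rng.standard_normal((200, 8))
        print(detect_grasp(emg)[::100])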
