“Immersive Real World through Deep Billboards” by Kondo, Kuroki, Hyakuta, Shane Gu and Ochiai

  • ©Naruya Kondo, So Kuroki, Ryosuke Hyakuta, Shixiang Shane Gu, and Yoichi Ochiai

Entry Number: 08

Title:


    Immersive Real World through Deep Billboards

Program Title:


    Immersive Pavilion

Presenter(s):


    Naruya Kondo, So Kuroki, Ryosuke Hyakuta, Shixiang Shane Gu, and Yoichi Ochiai

Description:


    An aspirational goal for virtual reality (VR) is to bring in a rich diversity of real-world objects losslessly. Existing VR applications often convert objects into explicit 3D models with meshes or point clouds, which allow fast interactive rendering but severely limit rendering quality and the types of objects that can be supported, fundamentally upper-bounding the “realism” of VR. Inspired by the classic “billboards” technique in gaming, we develop Deep Billboards, which model 3D objects implicitly using neural networks and render only a 2D image at a time based on the user’s viewing direction. Our system, connecting a commercial VR headset with a server running neural rendering, allows real-time, high-resolution simulation of detailed rigid objects, hairy objects, actuated dynamic objects, and more in an interactive VR world, drastically narrowing the existing real-to-simulation (real2sim) gap. Additionally, we augment Deep Billboards with physical interaction capability, adapting classic billboards from screen-based games to immersive VR. At our pavilion, visitors can use our off-the-shelf setup to quickly capture their favorite objects and, within minutes, experience them in an immersive and interactive VR world with minimal loss of reality. Our project page: https://sites.google.com/view/deepbillboards/
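
    To make the mechanism above concrete, here is a minimal Python sketch of one Deep Billboards frame: a quad is rotated to face the headset, and its texture is a 2D image that a neural renderer produces for the current viewing direction. This is an illustration under stated assumptions, not the authors' implementation; fake_neural_render is a hypothetical stand-in for the remote neural rendering server (e.g., a NeRF-style model), and all names here are invented for the example.

    import numpy as np

    def billboard_rotation(object_pos, viewer_pos, world_up=np.array([0.0, 1.0, 0.0])):
        # Classic billboard behavior: orient the quad's normal toward the viewer.
        normal = viewer_pos - object_pos
        normal = normal / np.linalg.norm(normal)
        right = np.cross(world_up, normal)
        right = right / np.linalg.norm(right)
        up = np.cross(normal, right)
        return np.stack([right, up, normal], axis=1)  # columns = quad x, y, normal axes

    def fake_neural_render(view_dir, hw=(256, 256)):
        # Hypothetical stand-in for the remote neural renderer (e.g., a NeRF-style
        # server): returns an RGBA image conditioned on the viewing direction.
        img = np.zeros((*hw, 4), dtype=np.float32)
        img[..., :3] = 0.5 + 0.5 * view_dir  # placeholder view-dependent shading
        img[..., 3] = 1.0  # fully opaque; a real renderer would return soft alpha
        return img

    def deep_billboard_frame(viewer_pos, object_pos, render_fn):
        # One frame: turn the quad toward the headset, then fetch the 2D view
        # rendered for that viewing direction and use it as the quad's texture.
        rotation = billboard_rotation(object_pos, viewer_pos)
        view_dir = object_pos - viewer_pos
        view_dir = view_dir / np.linalg.norm(view_dir)
        rgba = render_fn(view_dir)
        return rotation, rgba

    if __name__ == "__main__":
        rotation, rgba = deep_billboard_frame(
            viewer_pos=np.array([0.0, 1.6, 2.0]),  # headset position (world space)
            object_pos=np.array([0.0, 1.0, 0.0]),  # billboard anchor (world space)
            render_fn=fake_neural_render,
        )
        print(rotation.shape, rgba.shape)  # (3, 3) (256, 256, 4)

    Because the object is stored implicitly and re-rendered per view, fidelity is bounded by the neural model rather than by a lossy conversion to a mesh or point cloud, which is the source of the real2sim gap the abstract describes.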
