“Omnidirectional display that presents information to the ambient environment with optical transparency” by Nakamura, Miyazaki, Yoshida and Sato

  • ©Toshikatsu Nakamura, Ryusuke Miyazaki, Kouga Yoshida, and Toshiki Sato


Entry Number: 06

Title:

    Omnidirectional display that presents information to the ambient environment with optical transparency

Presenter(s)/Author(s):

    Toshikatsu Nakamura, Ryusuke Miyazaki, Kouga Yoshida, and Toshiki Sato

Abstract:


    Omnidirectional display is an information presentation method based on projection mapping, in which images are projected onto the entire surface (all sides) of a three-dimensional structure (screen). With this method, users can view the projected images on the three-dimensional shape from various angles.

    Omnidirectional displays have the advantage that multiple people can simultaneously view highly immersive images projected onto a real three-dimensional shape in front of them, without the need for HMDs or similar devices. In addition, displays of various shapes can be realized, from simple boxes and spheres to complex three-dimensional forms.

    In this study, we categorize omnidirectional displays into two different types depending on whether the user browses the three-dimensional shape on which the images are projected “from the outside” or “from the inside” (Fig.1).

    The former (A in Fig.1), if small, can be held directly in the hand and viewed from various angles [Pla and Maes 2013], or, if medium-sized, surrounded by several users simultaneously [Beyer et al. 2011]. The latter (B in Fig.1) lets the user enter the internal space of the screen, projected onto the surrounding surfaces (the walls, floor, and ceiling of the room), and view the images with a high level of immersion [Jones et al. 2014; Jones et al. 2013].

    In both cases, the user cannot see the entire screen at once: in the former, the surface opposite the user is hidden, while in the latter, the surface behind the user is hidden. Different actions are also required to view the hidden surface. In the former case, a small display can be moved by hand or inspected by leaning the upper body, whereas a medium-sized display can be viewed by walking around it. In the latter case, the user can walk around inside the display or turn their head to look at it.

    Thus, there is a clear difference in the interaction elements of the two displays. Traditionally, they have been considered to be completely different displays.

    However, considering the user’s position, the only difference between A and B (Fig.1) is whether the user stands outside or inside the omnidirectional screen. Moreover, when the images can be projected outward from the center of A, as shown in Fig.2, the only remaining difference between A and B is the presence of screen A itself.

    Therefore, we propose a new omnidirectional display that can dynamically control both the omnidirectional projection from a single central point and the transparency of the screen material, together with a new platform that integrates omnidirectional displays A and B into a single device. This device not only allows the user to selectively use the interaction elements and features of A and B, but also allows the interaction elements of each type to be extended by the features of the other.
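    The integration of modes A and B via screen transparency can be sketched as a simple state controller. The class and method names below are hypothetical illustrations of the idea, not the authors' implementation; the transparency values are assumptions for the sketch.

    ```python
    from enum import Enum

    class ViewMode(Enum):
        OUTSIDE = "A"  # user views the screen from outside (Fig.1 A)
        INSIDE = "B"   # user views the surroundings from inside (Fig.1 B)

    class OmnidirectionalDisplay:
        """Hypothetical controller for a screen whose transparency is
        dynamically adjusted, integrating display types A and B."""

        def __init__(self) -> None:
            self.transparency = 0.0  # 0.0 = fully opaque, 1.0 = fully transparent

        def set_mode(self, mode: ViewMode) -> float:
            # In mode A the screen must be opaque so the centrally projected
            # images appear on its surface; in mode B the screen is made
            # transparent so the central projection passes through it and
            # reaches the surrounding walls, floor, and ceiling instead.
            self.transparency = 0.0 if mode is ViewMode.OUTSIDE else 1.0
            return self.transparency

    display = OmnidirectionalDisplay()
    display.set_mode(ViewMode.OUTSIDE)  # opaque screen: type-A viewing
    display.set_mode(ViewMode.INSIDE)   # transparent screen: type-B viewing
    ```

    In this framing, mode switching is a single continuous parameter (transparency), which is what allows the two traditionally separate display types to share one platform.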


Acknowledgements:


    This work was supported by JSPS KAKENHI Grant Number JP21K11993.

