“An Automultiscopic Projector Array for Interactive Digital Humans” by Jones, Unger, Nagano, Busch, Yu, et al.
Conference:
- SIGGRAPH 2015
Type(s):
Entry Number: 06
Title:
- An Automultiscopic Projector Array for Interactive Digital Humans
Presenter(s):
Description:
Automultiscopic 3D displays allow a large number of viewers to experience 3D content simultaneously without the hassle of special glasses or headgear. Our display uses a dense array of video projectors to generate many images with high angular density over a wide field of view. As each user moves around the display, their eyes smoothly transition from one view to the next. The display is ideal for showing life-size human subjects, as it allows for natural personal interactions with 3D cues such as eye gaze and spatial hand gestures. In this installation, we explore “time-offset” interactions with recorded 3D human subjects.
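To make the view-transition behavior concrete, here is a minimal sketch of how a viewer's horizontal angle could be mapped to a blend of the two nearest projector views. The projector count, field of view, and linear blending below are illustrative assumptions, not the installation's actual specifications:

```python
import math

# Illustrative geometry only; these numbers are assumptions, not the
# display's actual specifications.
NUM_PROJECTORS = 72          # dense horizontal array of video projectors
FIELD_OF_VIEW_DEG = 110.0    # horizontal field of view covered by the array
ANGULAR_PITCH_DEG = FIELD_OF_VIEW_DEG / (NUM_PROJECTORS - 1)

def views_for_eye(eye_angle_deg):
    """Return the two projector views nearest a viewer's horizontal angle
    (measured from the screen normal) and their blend weights.

    As the eye moves, weight shifts smoothly from one view to the next,
    which is what makes the transition between views appear continuous.
    """
    # Eye position expressed in units of view spacing from the array's edge.
    t = (eye_angle_deg + FIELD_OF_VIEW_DEG / 2.0) / ANGULAR_PITCH_DEG
    lo = int(max(0, min(NUM_PROJECTORS - 2, math.floor(t))))
    frac = min(max(t - lo, 0.0), 1.0)
    return [(lo, 1.0 - frac), (lo + 1, frac)]

# An eye 3.2 degrees left of center sees mostly one view, with a small
# contribution from its neighbor.
print(views_for_eye(-3.2))
```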
For each subject, we have recorded a large set of video statements, and users access these statements through natural conversation that mimics face-to-face interaction. Conversational reactions to user questions are retrieved through speech recognition and a statistical classifier that finds the best video response for a given question. Recordings of answers, listening, and idle behaviors are linked together to create a persistent visual image of the person throughout the interaction. While it is impossible to anticipate every question and answer, we are scaling our system to handle 10–20 hours of interviews, which should make it possible to simulate spontaneous and usefully informative conversations. More details on our natural language engine can be found in [Artstein et al. 2014].
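As a rough illustration of the retrieval step, the sketch below scores each recorded statement's transcript against the recognized question and falls back to a clarification clip when nothing matches well. The actual system uses a trained statistical classifier (see [Artstein et al. 2014]); the keyword-overlap scoring, clip names, transcripts, and threshold here are hypothetical stand-ins:

```python
import re
from collections import Counter

# Hypothetical clip names and transcripts, purely for illustration.
RESPONSES = {
    "clip_017.mp4": "I was born in a small town and grew up with my two sisters.",
    "clip_042.mp4": "After the war I eventually settled in Los Angeles.",
    "clip_103.mp4": "Could you please repeat the question?",  # clarification clip
}

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def overlap_score(question, transcript):
    """Fraction of question words that also occur in the transcript."""
    q, t = Counter(tokens(question)), Counter(tokens(transcript))
    shared = sum((q & t).values())
    return shared / max(len(tokens(question)), 1)

def best_response(question, threshold=0.25):
    best = max(RESPONSES, key=lambda clip: overlap_score(question, RESPONSES[clip]))
    # Fall back to the clarification clip when no statement matches well,
    # rather than playing an unrelated answer.
    if overlap_score(question, RESPONSES[best]) < threshold:
        return "clip_103.mp4"
    return best

print(best_response("In which town were you born?"))  # -> clip_017.mp4
```

In the installation, the selected answer clip would then be chained with the recorded listening and idle clips so the subject remains visually present between questions.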
References:
ARTSTEIN, R., TRAUM, D., ALEXANDER, O., LEUSKI, A., JONES, A., GEORGILA, K., DEBEVEC, P., SWARTOUT, W., MAIO, H., AND SMITH, S. 2014. Time-offset interaction with a Holocaust survivor. In Proceedings of the 19th International Conference on Intelligent User Interfaces (IUI ’14), ACM, New York, NY, USA, 163–168.
JONES, A., NAGANO, K., LIU, J., BUSCH, J., YU, X., BOLAS, M., AND DEBEVEC, P. 2014. Interpolating vertical parallax for an autostereoscopic 3D projector array.