“Context-Aware 3D Gesture Recognition for Games and Virtual Reality” by LaViola

  • ©Joseph J. LaViola



Entry Number: 17


    Context-Aware 3D Gesture Recognition for Games and Virtual Reality

Course Organizer:

    Joseph J. LaViola



    Prerequisites
    Introductory computer graphics and linear algebra.

    Who Should Attend
    Anyone interested in learning how to design and develop 3D gesture recognizers and interfaces: researchers, game designers, scientists, developers, and hobbyists.

    3D gestural interfaces provide a powerful and natural way to interact with computers using the hands and body in a variety of applications, including video games, training and simulation, and virtual and augmented reality. With advancements in commodity motion sensors and user-tracking technology, this interaction paradigm is also becoming more commonplace in everyday computing. However, recognizing 3D gestures accurately enough for reliable use in these applications remains a challenging problem. This course explores how contextual information (about the user and the virtual environment) can be integrated directly into machine-learning algorithms to improve recognition speed and accuracy.
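    One simple way to picture this idea is to combine a recognizer's raw per-gesture scores with a prior derived from application state. The sketch below is illustrative only and not from the course materials; the gesture names, prior values, and the multiplicative-rescoring scheme are all assumptions.

```python
import numpy as np

# Hypothetical gesture vocabulary; names are illustrative only.
GESTURES = ["swipe", "punch", "throw", "block"]

def contextual_rescore(scores, context_prior):
    """Combine raw recognizer scores with a context prior over gestures.

    scores:        per-gesture likelihoods from any recognizer (e.g. an SVM).
    context_prior: per-gesture weights derived from application state,
                   e.g. near-zero for gestures invalid in the current scene.
    Returns a normalized, posterior-like distribution over gestures.
    """
    combined = np.asarray(scores, dtype=float) * np.asarray(context_prior, dtype=float)
    total = combined.sum()
    if total == 0.0:  # context rules out everything: fall back to raw scores
        combined = np.asarray(scores, dtype=float)
        total = combined.sum()
    return combined / total

# Example: the raw classifier slightly prefers "throw", but the game state
# says no object is held, so context down-weights "throw".
raw = [0.20, 0.25, 0.35, 0.20]
prior = [1.0, 1.0, 0.1, 1.0]
posterior = contextual_rescore(raw, prior)
best = GESTURES[int(np.argmax(posterior))]  # context flips the decision
```

    In this toy example the context prior suppresses an implausible gesture before the final decision, which is one (deliberately simple) form of the context integration the course covers in depth.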

    The course begins with a brief examination of the basic concepts of 3D gesture recognition, including:

    • How to collect raw 3D gesture data from the user’s fingers, hands, and whole body with both passive and active tracking schemes.
    • Existing machine-learning algorithms commonly used in 3D gesture recognizers.

    The main part of the course presents methods for integrating contextual information into machine-learning algorithms such as simple linear classifiers, dynamic time warping, and support vector machines, and compares their performance with state-of-the-art machine-learning algorithms. The course concludes with a summary of important future research directions that are critical to improving 3D gestural interaction.
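    Of the algorithms named above, dynamic time warping is the easiest to sketch: it aligns two gesture trajectories of different lengths and speeds, and a nearest-template classifier can then pick the gesture with the smallest DTW distance. The following is a minimal, unoptimized sketch (the standard O(nm) dynamic program, not code from the course):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two gesture trajectories.

    a, b: sequences of shape (T, D), e.g. sampled 3D hand positions.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    # D[i, j] = cost of the best alignment of a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

    Because DTW tolerates variation in gesture speed, it pairs naturally with small per-user template sets; in practice a Sakoe-Chiba band or similar constraint is usually added to keep the alignment (and runtime) in check.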
