“Modular dynamic response from motion databases” by Mallios, Mehta, Street and Jenkins

  • ©Jason Mallios, Neil Mehta, Chipalo Street, and Odest Jenkins

    Modular dynamic response from motion databases



    Animation of humanoid figures is a significant component of many current applications (e.g., video games, movie-making). However, the process of creating viable animations can be tedious, time-consuming, and expensive due to the complexity of controlling a character through a large number of degrees of freedom. Such difficulties are further compounded when the character is subject to external forces, as is often the case in video games. Recent dynamic response methods by Zordan et al. [Zordan et al.] and Mandel [Mandel] go beyond limp passive “ragdoll” animation by transitioning between motion capture playback and controlled physical simulation. When an unanticipated external force is applied to the character, a search is performed in the motion capture database to find the closest matching pose to the character’s current configuration. The found pose serves as a desired configuration for the character to servo towards and transition back into motion capture playback. Current dynamic response methods use a monolithic motion database that: 1) requires a significant computational burden for search (70% of computation time for Zordan et al.) and 2) does not readily incorporate user input. We address both of these limitations by using a modular collection of motion databases, each representing some action (e.g., run, punch, kick). Through modularity, we invoke smaller independent search procedures on each database, where the choice of desired pose is informed by the action represented by each database. Eventually, we envision dense motion databases constructed from learned parameterized models [Jenkins and Matarić; Kovar and Gleicher].
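    The modular search described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: poses are represented as joint-angle tuples, each action keys its own small database, and a simple Euclidean metric stands in for whatever pose-distance measure the real system uses. All names and data are illustrative.

    ```python
    import math

    def pose_distance(a, b):
        # Euclidean distance between two joint-angle vectors (illustrative metric)
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def nearest_pose(database, query):
        # Linear search over one small action-specific database
        return min(database, key=lambda pose: pose_distance(pose, query))

    def modular_search(databases, query):
        # Search each action database independently; return the best match
        # together with the action it came from, so the choice of desired
        # pose is informed by the action each database represents.
        best_action, best_pose, best_dist = None, None, float("inf")
        for action, db in databases.items():
            pose = nearest_pose(db, query)
            d = pose_distance(pose, query)
            if d < best_dist:
                best_action, best_pose, best_dist = action, pose, d
        return best_action, best_pose

    # Toy databases keyed by action (two joint angles per pose)
    databases = {
        "run":   [(0.1, 0.2), (0.3, 0.1)],
        "punch": [(1.0, 1.1), (0.9, 1.2)],
        "kick":  [(2.0, 0.5)],
    }
    action, pose = modular_search(databases, (0.95, 1.15))
    ```

    Because each per-action database is small, the independent searches are cheap and can be pruned or prioritized by game state, in contrast to one monolithic search.
    
    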


    1. Jenkins, O. C., and Matarić, M. J. Performance-derived behavior vocabularies: Data-driven acquisition of skills from motion.
    2. Kovar, L., and Gleicher, M. Automated extraction and parameterization of motions in large data sets.
    3. Mandel, M. Versatile and interactive virtual humans: Hybrid use of data-driven and dynamics-based motion synthesis.
    4. Zordan, V. B., Majkowska, A., Chiu, B., and Fast, M. Dynamic response for motion capture animation.
