“Neural animation layering for synthesizing martial arts movements” by Starke, Zhao, Zinno and Komura

  • © Sebastian Starke, Yiwei Zhao, Fabio Zinno, and Taku Komura

Conference:

    SIGGRAPH 2021

Type:


Title:

    Neural animation layering for synthesizing martial arts movements

Presenter(s)/Author(s):

    Sebastian Starke, Yiwei Zhao, Fabio Zinno, and Taku Komura

Abstract:


    Interactively synthesizing novel combinations and variations of character movements from different motion skills is a key problem in computer animation. In this paper, we propose a deep learning framework that produces a large variety of martial arts movements in a controllable manner from raw motion capture data. Our method imitates animation layering using neural networks, with the aim of overcoming typical challenges when mixing, blending, and editing movements from unaligned motion sources. The framework can synthesize novel movements from given reference motions and simple user controls, and generate unseen sequences of locomotion, punching, kicking, avoiding, and combinations thereof; it can also reconstruct signature motions of different fighters, as well as close-character interactions such as clinching and carrying, by learning the spatial relationships between joints. To achieve this, we adopt a modular framework composed of a motion generator and a set of task-dependent control modules. The motion generator functions as a motion manifold that projects novel mixed or edited trajectories onto natural full-body motions and synthesizes realistic transitions between different motions. The control modules can be developed and trained separately by engineers to add novel motion tasks, which greatly reduces network iteration time when working with large-scale datasets. This modular design provides a transparent control interface that lets animators modify or combine movements after network training, and allows control modules for new motion tasks and behaviors to be added iteratively. Our system can be used for offline and online motion generation alike, and is relevant for real-time applications such as computer games.
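
    The modular design described in the abstract can be illustrated with a small code sketch. The following is a minimal, hypothetical PyTorch layout, not the authors' implementation: task-specific control modules emit control/trajectory signals that are blended ("layered") and then projected by a shared motion generator onto the next full-body pose. All class names, feature dimensions, network sizes, and the blending scheme are illustrative assumptions.

```python
# Hedged sketch of a modular motion-synthesis setup (illustrative only).
import torch
import torch.nn as nn


class ControlModule(nn.Module):
    """Task-dependent module (e.g. locomotion, punching) mapping the current
    pose and a user control input to a control/trajectory signal."""
    def __init__(self, pose_dim=276, control_dim=32, signal_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + control_dim, 256), nn.ELU(),
            nn.Linear(256, signal_dim),
        )

    def forward(self, pose, user_control):
        return self.net(torch.cat([pose, user_control], dim=-1))


class MotionGenerator(nn.Module):
    """Shared 'motion manifold': projects a (possibly mixed or edited) control
    signal plus the previous pose onto the next full-body pose."""
    def __init__(self, pose_dim=276, signal_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + signal_dim, 512), nn.ELU(),
            nn.Linear(512, 512), nn.ELU(),
            nn.Linear(512, pose_dim),
        )

    def forward(self, prev_pose, control_signal):
        return self.net(torch.cat([prev_pose, control_signal], dim=-1))


def step(generator, modules, weights, prev_pose, user_controls):
    """Layering step: blend the signals of several control modules before the
    generator, so movements can be mixed or edited without retraining it."""
    signals = [m(prev_pose, c) for m, c in zip(modules, user_controls)]
    mixed = sum(w * s for w, s in zip(weights, signals))
    return generator(prev_pose, mixed)
```

    For example, running a locomotion module and a punching module with weights that vary over time layers a punch onto a walk, while the generator keeps the resulting full-body pose close to the learned motion manifold. New control modules can be trained and plugged in without touching the generator, mirroring the iterative workflow the abstract describes.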
