“Learning to Schedule Control Fragments for Physics-Based Characters Using Deep Q-Learning” by Liu and Hodgins

  • ©Libin Liu and Jessica K. Hodgins


Session/Category Title: Learning to Move

Abstract:


    Given a robust control system, physical simulation offers the potential for interactive human characters that move in realistic and responsive ways. In this article, we describe how to learn a scheduling scheme that reorders short control fragments as necessary at runtime, yielding a control system that responds to disturbances and allows steering and other user interactions. These schedulers provide robust control of a wide range of highly dynamic behaviors, including walking on a ball, balancing on a bongo board, skateboarding, running, push-recovery, and breakdancing. We show that moderate-sized Q-networks can model the schedulers for these control tasks effectively and that those schedulers can be learned efficiently by the deep Q-learning algorithm.
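    The scheduling idea in the abstract can be illustrated in miniature: a learned Q-function scores each available control fragment given the current simulation state, and the scheduler runs the highest-scoring fragment next. The sketch below is an assumption-laden toy, not the paper's system: it substitutes a linear Q-function for the paper's Q-network, and the state features, fragment count, and reward signal are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4    # toy simulation-state features (hypothetical)
N_FRAGMENTS = 3   # number of short control fragments to schedule

# Linear Q-function as a stand-in for the paper's Q-network:
# Q(s, a) = W[a] @ s, one weight row per control fragment.
W = np.zeros((N_FRAGMENTS, N_FEATURES))

def select_fragment(state, epsilon=0.1):
    """Epsilon-greedy scheduling: usually run the fragment with the
    highest Q-value, occasionally explore a random one."""
    if rng.random() < epsilon:
        return int(rng.integers(N_FRAGMENTS))
    return int(np.argmax(W @ state))

def q_update(state, action, reward, next_state, alpha=0.05, gamma=0.95):
    """One-step Q-learning: move Q(state, action) toward the
    bootstrapped target r + gamma * max_a' Q(next_state, a')."""
    target = reward + gamma * np.max(W @ next_state)
    td_error = target - W[action] @ state
    W[action] += alpha * td_error * state

def step(state, action):
    """Toy stand-in for running one control fragment in simulation:
    reward the scheduler when the chosen fragment matches a hidden
    'best' fragment implied by the state (purely illustrative)."""
    best = int(np.argmax(state[:N_FRAGMENTS]))
    reward = 1.0 if action == best else 0.0
    return reward, rng.random(N_FEATURES)

# Training loop: schedule a fragment, observe the outcome, update Q.
state = rng.random(N_FEATURES)
for _ in range(5000):
    a = select_fragment(state, epsilon=0.2)
    r, next_state = step(state, a)
    q_update(state, a, r, next_state)
    state = next_state
```

    In the actual system the state would be a feature vector extracted from the simulated character, each action would execute a short tracking-control fragment for its duration, and the Q-function would be a moderate-sized neural network trained with the stabilizing machinery of deep Q-learning (experience replay, target networks); this sketch only shows the scheduling-as-Q-learning structure.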

