“Character Animation in Two-Player Adversarial Games” by Wampler, Andersen, Herbst, Lee and Popović

  • © Kevin Wampler, Erik Andersen, Evan Herbst, Yongjoon Lee, and Zoran Popović







    The incorporation of randomness is critical for the believability and effectiveness of controllers for characters in competitive games. We present a fully automatic method for generating intelligent real-time controllers for characters in such games. Our approach uses game theory to deal with the ramifications of the characters acting simultaneously, and generates controllers that employ both long-term planning and an intelligent use of randomness. Our results exhibit nuanced strategies based on unpredictability, such as feints and misdirection moves, which take into account and exploit the possible strategies of an adversary. The controllers are generated by examining the interaction between the rules of the game and the motions generated from a parametric motion graph. This involves solving a large-scale planning problem, so we also describe a new technique for scaling this process to higher dimensions.
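    To illustrate why randomness is essential when two characters act simultaneously, the core game-theoretic idea can be sketched with a small zero-sum matrix game: any deterministic policy is exploitable, so the safe policy is a mixed (randomized) strategy. The sketch below approximates that mixed strategy with fictitious play; the payoff matrix, function names, and iteration count are illustrative assumptions, not the paper's actual algorithm or data.

    ```python
    # Sketch (not the paper's method): approximate the optimal mixed
    # strategy of a zero-sum matrix game via fictitious play, where each
    # player repeatedly best-responds to the opponent's empirical mix.

    def fictitious_play(payoff, iters=20000):
        """Return the row player's empirical strategy frequencies.

        payoff[i][j] is the row player's payoff when the row player
        picks action i and the column player simultaneously picks j.
        """
        n, m = len(payoff), len(payoff[0])
        row_counts = [0] * n
        col_counts = [0] * m
        row_counts[0] = col_counts[0] = 1  # arbitrary opening moves
        for _ in range(iters):
            # Expected payoff of each pure action against the
            # opponent's empirical action counts so far.
            row_vals = [sum(payoff[i][j] * col_counts[j] for j in range(m))
                        for i in range(n)]
            col_vals = [sum(payoff[i][j] * row_counts[i] for i in range(n))
                        for j in range(m)]
            row_counts[max(range(n), key=lambda i: row_vals[i])] += 1  # maximizer
            col_counts[min(range(m), key=lambda j: col_vals[j])] += 1  # minimizer
        total = sum(row_counts)
        return [c / total for c in row_counts]

    # A rock-paper-scissors-like "attack choice" matrix: every pure
    # strategy loses to some counter, so only a randomized policy is safe.
    rps = [[0, -1, 1],
           [1, 0, -1],
           [-1, 1, 0]]
    mix = fictitious_play(rps)
    # mix converges toward the uniform equilibrium [1/3, 1/3, 1/3]
    ```

    A controller that played any single row deterministically could be beaten every time; the equilibrium mix is what makes feints and misdirection effective, since the adversary cannot predict which action comes next.
    
    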


