
Character animation in two-player adversarial games

Published: 02 July 2010

Abstract

The incorporation of randomness is critical for the believability and effectiveness of controllers for characters in competitive games. We present a fully automatic method for generating intelligent real-time controllers for characters in such a game. Our approach uses game theory to deal with the ramifications of the characters acting simultaneously, and generates controllers which employ both long-term planning and an intelligent use of randomness. Our results exhibit nuanced strategies based on unpredictability, such as feints and misdirection moves, which take into account and exploit the possible strategies of an adversary. The controllers are generated by examining the interaction between the rules of the game and the motions generated from a parametric motion graph. This involves solving a large-scale planning problem, so we also describe a new technique for scaling this process to higher dimensions.
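The game-theoretic core of the abstract — choosing a randomized (mixed) strategy because both characters act simultaneously — can be illustrated with the closed-form solution of a 2×2 zero-sum matrix game. The sketch below is illustrative only: the strike/block framing and the payoff numbers are invented for this example and are not taken from the paper, which solves much larger planning problems over parametric motion graphs.

```python
from fractions import Fraction

def solve_2x2_zero_sum(a, b, c, d):
    """Mixed-strategy solution of the 2x2 zero-sum game with row-player
    payoff matrix [[a, b], [c, d]], assuming no pure saddle point.
    Returns (p, q, value): p is the row player's probability of action 0,
    q the column player's probability of action 0."""
    denom = a - b - c + d
    p = Fraction(d - c, denom)
    q = Fraction(d - b, denom)
    value = Fraction(a * d - b * c, denom)
    return p, q, value

# Toy "feint" scenario: an attacker strikes high or low, a defender
# blocks high or low; a landed low strike (3) outscores a high one (2).
# The optimal play is randomized -- neither side can commit to one move.
p, q, v = solve_2x2_zero_sum(0, 2, 3, 0)
print(p, q, v)  # → 3/5 2/5 6/5
```

At the equilibrium the attacker strikes high with probability 3/5 and the defender blocks high with probability 2/5; each player's mix leaves the opponent indifferent between their two responses, which is exactly the kind of principled unpredictability the controllers in the paper exhibit at far larger scale.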

Supplementary Material

wampler.mov — 1st supplemental movie file for "Character animation in two-player adversarial games"
tp053_11.mp4 — MP4 file



    Published In

    ACM Transactions on Graphics, Volume 29, Issue 3
    June 2010
    104 pages
    ISSN: 0730-0301
    EISSN: 1557-7368
    DOI: 10.1145/1805964

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 02 July 2010
    Accepted: 01 April 2010
    Received: 01 February 2010


    Author Tags

    1. Character animation
    2. game theory
    3. optimal control

    Qualifiers

    • Research-article
    • Research
    • Refereed

