Learned motion matching

Published: 12 August 2020

Abstract

In this paper we present a learned alternative to the Motion Matching algorithm that retains the positive properties of Motion Matching but additionally achieves the scalability of neural-network-based generative models. Although neural-network-based generative models for character animation are capable of learning expressive, compact controllers from vast amounts of animation data, methods such as Motion Matching remain a popular choice in the games industry due to their flexibility, predictability, low preprocessing time, and visual quality, all properties which can sometimes be difficult to achieve with neural-network-based methods. Yet, unlike neural networks, the memory usage of such methods generally scales linearly with the amount of data used, resulting in a constant trade-off between the diversity of animation which can be produced and real-world production budgets. In this work we combine the benefits of both approaches and, by breaking down the Motion Matching algorithm into its individual steps, show how learned, scalable alternatives can be used to replace each operation in turn. Our final model has no need to store animation data or additional matching metadata in memory, meaning it scales as well as existing generative models. At the same time, we preserve the behavior of Motion Matching, retaining the quality, control, and quick iteration time which are so important in the industry.
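
To make the decomposition concrete, the following is a minimal, illustrative sketch in Python/NumPy, not the authors' implementation: it contrasts the brute-force nearest-neighbour search at the heart of classical Motion Matching, whose memory cost grows with the number of stored animation frames, with a hypothetical small network (made-up dimensions, untrained random weights) that maps the same query features directly to a pose, so no animation database needs to be resident at runtime.

    import numpy as np

    def motion_matching_search(query, features, poses):
        # Brute-force nearest-neighbour lookup: memory grows with the number
        # of stored animation frames N.
        dists = np.sum((features - query) ** 2, axis=1)  # squared distance per frame
        best = int(np.argmin(dists))                     # index of the best match
        return poses[best]

    def mlp_forward(x, layers):
        # Evaluate a small fully connected network with ReLU hidden activations.
        h = x
        for W, b in layers[:-1]:
            h = np.maximum(W @ h + b, 0.0)
        W, b = layers[-1]
        return W @ h + b

    # Toy data standing in for a real animation database (dimensions are made up).
    rng = np.random.default_rng(0)
    N, F, P = 10000, 27, 100                 # frames, feature dims, pose dims
    features = rng.standard_normal((N, F))
    poses = rng.standard_normal((N, P))
    query = rng.standard_normal(F)

    # Classical Motion Matching step: search the database for the best pose.
    matched_pose = motion_matching_search(query, features, poses)

    # Learned replacement (untrained random weights here): the network maps the
    # query features directly to a pose; in practice it would be trained to
    # reproduce the search results on the training database.
    layers = [(0.01 * rng.standard_normal((128, F)), np.zeros(128)),
              (0.01 * rng.standard_normal((P, 128)), np.zeros(P))]
    predicted_pose = mlp_forward(query, layers)

The memory contrast is the point: the search must keep all N stored feature and pose vectors resident, whereas the network's footprint is fixed by its weights regardless of how much data it was trained on.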

Supplemental Material

  • Presentation video (MP4 file), with transcript
  • Supplemental files (ZIP file)

Published In

ACM Transactions on Graphics, Volume 39, Issue 4
August 2020, 1732 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3386569

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 August 2020
Published in TOG Volume 39, Issue 4

Author Tags

  1. animation
  2. character animation
  3. generative models
  4. motion matching
  5. neural networks
