Research article · DOI: 10.1145/3623264.3624442

Learning Robust and Scalable Motion Matching with Lipschitz Continuity and Sparse Mixture of Experts

Published: 15 November 2023

Abstract

Motion matching [Büttner and Clavet 2015; Clavet 2016] has become a widely adopted technique for producing high-quality interactive character animation in video games. However, current implementations incur significant computational and memory overheads, which limits how far the technique scales within the performance budgets of modern video games.
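At its core, classic motion matching periodically searches a database of pose and trajectory features for the animation frame that best matches the character's current state. The sketch below is a minimal illustration of that query, not the authors' implementation: the feature weighting and the brute-force scan are simplifying assumptions, and production systems accelerate the search with structures such as KD-trees, e.g. nanoflann [1].

```python
import numpy as np

def build_feature_database(features, weights):
    """Scale each feature dimension (e.g. foot positions/velocities and
    future trajectory samples) by a user-tuned weight.
    `features` is an (n_frames, n_dims) array extracted from the animation data."""
    return features * weights

def motion_matching_query(database, query, weights):
    """Return the index of the animation frame whose weighted feature
    vector is closest to the weighted query vector."""
    q = query * weights
    dists = np.sum((database - q) ** 2, axis=1)  # brute-force squared L2 distance
    return int(np.argmin(dists))

# Usage: find the best matching frame for the current query features.
rng = np.random.default_rng(0)
weights = np.ones(24)
db = build_feature_database(rng.standard_normal((10_000, 24)), weights)
best_frame = motion_matching_query(db, rng.standard_normal(24), weights)
```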
"Learned Motion Matching"[Holden et al. 2020] mitigated some of these challenges, however, whilst reducing memory requirements, it resulted in increases in performance costs. In this paper, we propose a novel method for learning motion matching that combines a Sparse Mixture of Experts model architecture and a Lipschitz-continuous latent space for representation of poses.
This approach significantly reduces the computational complexity of the models, while simultaneously improving the compactness of the data that can be stored and the robustness of pose output. As a result, our method enables the efficient execution of motion matching that significantly outperforms other implementations for large character counts, by 8.5x times in CPU execution cost and at 80% of the memory requirements of "Learned Motion Matching", on contemporary video game hardware, thereby enhancing its practical applicability and scalability in the gaming industry.
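To make the two ingredients named above concrete, the following PyTorch sketch shows a top-1 sparse mixture-of-experts layer whose expert weights are kept Lipschitz-bounded by rescaling each weight matrix when its spectral norm exceeds a target bound, in the spirit of Gouk et al. [8] and Liu et al. [20]. This is an illustrative example under our own assumptions, not the authors' released code; class names such as LipschitzLinear and SparseMoE are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LipschitzLinear(nn.Linear):
    """Linear layer whose spectral norm is capped at `max_norm`, giving a
    per-layer Lipschitz bound (the paper may use a different scheme)."""
    def __init__(self, in_features, out_features, max_norm=1.0):
        super().__init__(in_features, out_features)
        self.max_norm = max_norm

    def forward(self, x):
        # Estimate the largest singular value and rescale the weight
        # matrix only if it exceeds the target bound.
        sigma = torch.linalg.matrix_norm(self.weight, ord=2)
        scale = torch.clamp(self.max_norm / (sigma + 1e-8), max=1.0)
        return F.linear(x, self.weight * scale, self.bias)

class SparseMoE(nn.Module):
    """Top-1 gated mixture of experts: each input is routed to a single
    expert MLP, so inference cost is that of one expert, not all of them."""
    def __init__(self, dim, hidden, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(LipschitzLinear(dim, hidden), nn.GELU(),
                          LipschitzLinear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                      # x: (batch, dim)
        logits = self.gate(x)                  # (batch, num_experts)
        probs = F.softmax(logits, dim=-1)
        top_p, top_i = probs.max(dim=-1)       # top-1 routing
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_i == e
            if mask.any():
                # Scale by the gate probability so the router stays trainable.
                out[mask] = expert(x[mask]) * top_p[mask].unsqueeze(-1)
        return out

# Usage: route a batch of 8 pose feature vectors of dimension 32.
moe = SparseMoE(dim=32, hidden=64)
y = moe(torch.randn(8, 32))
```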

References

[1]
Jose Luis Blanco and Pranjal Kumar Rai. 2014. nanoflann: a C++ header-only fork of FLANN, a library for Nearest Neighbor (NN) with KD-trees. https://github.com/jlblancoc/nanoflann.
[2]
David Bollo. 2018. Inertialization: High-Performance Animation Transitions in Gears of War. GDC ’18 (March 2018). https://www.youtube.com/watch?v=BYyv4KTegJI
[3]
Michael Büttner and Simon Clavet. 2015. Motion matching - the road to next gen animation. Nucl.ai ’2015 (July 2015).
[4]
Simon Clavet. 2016. Motion Matching and The Road to Next-Gen Animation. GDC ’16 (March 2016). https://www.gdcvault.com/play/1023280/Motion-Matching-and-The-Road
[5]
ONNX Runtime developers. 2021. ONNX Runtime. https://onnxruntime.ai/. Version: x.y.z.
[6]
William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. J. Mach. Learn. Res. 23 (2022), 120:1–120:39. http://jmlr.org/papers/v23/21-0998.html
[7]
Nicholas Frechette and Animation Compression Library contributors. 2017. Animation Compression Library. https://github.com/nfrechette/acl.
[8]
Henry Gouk, Eibe Frank, Bernhard Pfahringer, and Michael J. Cree. 2020. Regularisation of neural networks by enforcing Lipschitz continuity. Machine Learning 110, 2 (Dec. 2020), 393–416. https://doi.org/10.1007/s10994-020-05929-w
[9]
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. arxiv:1502.01852 [cs.CV]
[10]
Thorsten Hempel, Ahmed A. Abdelrahman, and Ayoub Al-Hamadi. 2022. 6D Rotation Representation For Unconstrained Head Pose Estimation. In 2022 IEEE International Conference on Image Processing (ICIP). 2496–2500. https://doi.org/10.1109/ICIP46576.2022.9897219
[11]
Dan Hendrycks and Kevin Gimpel. 2023. Gaussian Error Linear Units (GELUs). arxiv:1606.08415 [cs.LG]
[12]
Daniel Holden, Oussama Kanoun, Maksym Perepichka, and Tiberiu Popa. 2020. Learned Motion Matching. ACM Trans. Graph. 39, 4, Article 1 (July 2020). https://doi.org/10.1145/3386569.3392440
[13]
Daniel Holden, Taku Komura, and Jun Saito. 2017. Phase-functioned neural networks for character control. ACM Transactions on Graphics 36, 4 (July 2017), 1–13. https://doi.org/10.1145/3072959.3073663
[14]
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data 7, 3 (2019), 535–547.
[15]
Tobias Kleanthous. 2021. Making The Believable Horses of Red Dead Redemption II. GDC ’21 (July 2021). https://www.youtube.com/watch?v=8vtCqfFAjKQ
[16]
Yongjoon Lee, Kevin Wampler, Gilbert Bernstein, Jovan Popović, and Zoran Popović. 2010. Motion fields for interactive character locomotion. In ACM SIGGRAPH Asia 2010 papers on - SIGGRAPH ASIA '10. ACM Press. https://doi.org/10.1145/1882262.1866160
[17]
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2021. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. https://openreview.net/forum?id=qrwe7XHTmYb
[18]
Peizhuo Li, Kfir Aberman, Zihan Zhang, Rana Hanocka, and Olga Sorkine-Hornung. 2022. GANimator: Neural Motion Synthesis from a Single Sequence. ACM Trans. Graph. 41, 4, Article 138 (July 2022). https://doi.org/10.1145/3528223.3530157
[19]
Hung Yu Ling, Fabio Zinno, George Cheng, and Michiel van de Panne. 2020. Character Controllers Using Motion VAEs. ACM Trans. Graph. 39, 4, Article 40 (July 2020). https://doi.org/10.1145/3386569.3392422
[20]
Hsueh-Ti Derek Liu, Francis Williams, Alec Jacobson, Sanja Fidler, and Or Litany. 2022. Learning Smooth Neural Functions via Lipschitz Regularization. In Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings (SIGGRAPH ’22 Conference Proceedings). https://doi.org/10.1145/3528233.3530713
[21]
Antoine Maiorca, Nathan Hubens, Sohaib Laraba, and Thierry Dutoit. 2022. Towards Lightweight Neural Animation: Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2022, Volume 1: GRAPP, Online Streaming, February 6-8, 2022, A. Augusto de Sousa, Kurt Debattista, and Kadi Bouatouch (Eds.). SCITEPRESS, 286–293. https://doi.org/10.5220/0010908700003124
[22]
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 2018. Spectral Normalization for Generative Adversarial Networks. arxiv:1802.05957 [cs.LG]
[23]
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32. Curran Associates, Inc., 8024–8035. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
[24]
Dario Pavllo, David Grangier, and Michael Auli. 2018. QuaterNet: A Quaternion-based Recurrent Model for Human Motion. arxiv:1805.06485 [cs.CV]
[25]
Nagaraj Raparthi, Eric Acosta, Alan Liu, and Tim McLaughlin. 2020. GPU-based Motion Matching for Crowds in the Unreal Engine. In SIGGRAPH Asia 2020 Posters. ACM. https://doi.org/10.1145/3415264.3425474
[26]
Sebastian Starke, Ian Mason, and Taku Komura. 2022. DeepPhase: periodic autoencoders for learning motion phase manifolds. ACM Trans. Graph. 41, 4, Article 136 (July 2022). https://doi.org/10.1145/3528223.3530178
[27]
Sebastian Starke, Yiwei Zhao, Taku Komura, and Kazi Zaman. 2020. Local motion phases for learning multi-contact character movements. ACM Transactions on Graphics 39, 4 (Aug. 2020). https://doi.org/10.1145/3386569.3392450
[28]
D Arul Suju and Hancy Jose. 2017. FLANN: Fast approximate nearest neighbour search algorithm for elucidating human-wildlife conflicts in forest areas. In 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN). 1–6. https://doi.org/10.1109/ICSCN.2017.8085676
[29]
Xingyu Xie, Pan Zhou, Huan Li, Zhouchen Lin, and Shuicheng Yan. 2023. Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models. arxiv:2208.06677 [cs.LG]
[30]
Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. 2020. On the Continuity of Rotation Representations in Neural Networks. arxiv:1812.07035 [cs.LG]
[31]
Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. ST-MoE: Designing Stable and Transferable Sparse Expert Models. arxiv:2202.08906 [cs.CL]

Cited By

  • (2024) Making motion matching stable and fast with Lipschitz-continuous neural networks and Sparse Mixture of Experts. Computers & Graphics 120, C (2024). https://doi.org/10.1016/j.cag.2024.103911. Online publication date: 18 Nov 2024.

Published In

MIG '23: Proceedings of the 16th ACM SIGGRAPH Conference on Motion, Interaction and Games
November 2023, 224 pages
ISBN: 9798400703935
DOI: 10.1145/3623264

Publisher

Association for Computing Machinery, New York, NY, United States

        Author Tags

        1. animation
        2. character animation
        3. motion matching
        4. neural networks
        5. regularization
