DOI: 10.1145/3487983.3488291
research-article
Best Paper

Motor Babble: Morphology-Driven Coordinated Control of Articulated Characters

Published: 10 November 2021

ABSTRACT

Locomotion in humans and animals is highly coordinated, with many joints moving together. Learning similar coordinated locomotion in articulated virtual characters, in the absence of reference motion data, is a challenging task due to the high number of degrees of freedom and the redundancy that comes with it. In this paper, we present a method for learning locomotion for virtual characters in a low dimensional latent space which defines how different joints move together. We introduce a technique called motor babble, wherein a character interacts with its environment by actuating its joints through uncoordinated, low-level (motor) excitations, resulting in a corpus of motion data from which a manifold latent space is extracted. Dimensions of the extracted manifold define a wide variety of synergies pertaining to the character and, through reinforcement learning, we train the character to learn locomotion in the latent space by selecting a small set of appropriate latent dimensions, along with learning the corresponding policy.
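The pipeline the abstract describes — babble with uncoordinated excitations, extract a latent manifold from the resulting motion corpus, then act in a few latent dimensions so every joint moves in a coordinated way — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the random stand-in "dynamics", the use of PCA (via SVD) for manifold extraction, and the sizes `n_joints` and `k` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints = 12     # assumed character DOF count
n_samples = 5000  # size of the babble corpus

# Motor babble: uncoordinated, low-level excitations of every joint.
# (Illustrative random torques; the paper drives a simulated character.)
babble_torques = rng.normal(size=(n_samples, n_joints))

# Record the resulting joint states; here a stand-in linear response
# replaces the physics simulation.
joint_states = babble_torques @ rng.normal(size=(n_joints, n_joints))

# Extract a manifold latent space from the babble corpus.
# PCA via SVD is one plausible choice; the abstract does not name the method.
mean = joint_states.mean(axis=0)
_, _, components = np.linalg.svd(joint_states - mean, full_matrices=False)

k = 4                   # small set of latent dimensions the policy selects
basis = components[:k]  # each row: one synergy spanning all joints

# A policy action in the k-dimensional latent space maps back to a
# coordinated excitation of all joints at once:
latent_action = rng.normal(size=k)
joint_excitation = latent_action @ basis + mean
print(joint_excitation.shape)  # → (12,)
```

In this sketch the reinforcement learning policy would output `latent_action` (4 numbers) instead of 12 per-joint torques, which is the dimensionality reduction the abstract refers to.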


Supplemental Material

a6-ranganath-video.mp4 (mp4, 25.2 MB)


Published in

MIG '21: Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games
November 2021, 166 pages
ISBN: 9781450391313
DOI: 10.1145/3487983
Copyright © 2021 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


    Qualifiers

    • research-article
    • Research
    • Refereed limited

Acceptance Rates

Overall acceptance rate: 9 of 9 submissions, 100%

