DOI: 10.1145/3623264.3624438

Generating Emotionally Expressive Look-At Animation

Published: 15 November 2023

Abstract

Humanoid characters in video games are generally animated using motion capture technology, enabling high-quality, realistic animation. This animation has to remain interactive and allow characters to react to their environment; one important component of this adaptation is the look-at animation, which directs the character’s torso towards an object or person of interest. Look-at animations are generated procedurally so that any desired target direction can be handled; however, this procedural animation can have a robotic quality and negatively affect the overall perceived realism of the character. In this work, we present a neural network controller for generating look-at animations that are as appealing as motion capture while requiring minimal memory. Moreover, our controller can generate animations stylized by emotion, allowing characters to react to look-at targets depending on their context, and this stylistic expressiveness is shown to be on par with motion-captured samples.
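The abstract describes the controller only at a high level; the paper itself specifies the actual architecture. As a purely illustrative, hypothetical sketch (not the authors' model), the following shows one way such a controller could be shaped: a small network that maps a 3-D target direction plus an emotion style label to per-joint rotations. All names, layer sizes, the joint set, and the emotion labels here are assumptions.

```python
# Hypothetical sketch of an emotion-stylized look-at controller.
# NOT the authors' architecture: the emotion set, joint set, layer
# sizes, and interface are all assumptions for illustration.
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "joy", "anger", "fear", "sadness"]  # assumed label set
NUM_JOINTS = 4  # e.g. spine, chest, neck, head (assumed)

class LookAtController(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Input: 3-D target direction concatenated with a one-hot emotion vector.
        self.net = nn.Sequential(
            nn.Linear(3 + len(EMOTIONS), hidden),
            nn.ELU(),
            nn.Linear(hidden, hidden),
            nn.ELU(),
            # Output: one quaternion (4 values) per controlled joint.
            nn.Linear(hidden, NUM_JOINTS * 4),
        )

    def forward(self, target_dir: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        x = torch.cat([target_dir, emotion], dim=-1)
        q = self.net(x).view(-1, NUM_JOINTS, 4)
        # Normalize to unit quaternions so the outputs are valid rotations.
        return q / q.norm(dim=-1, keepdim=True)

# Usage: predict one frame's joint rotations for a unit target direction
# with an "anger" style.
controller = LookAtController()
direction = torch.tensor([[0.0, 0.0, 1.0]])
style = torch.nn.functional.one_hot(
    torch.tensor([EMOTIONS.index("anger")]), len(EMOTIONS)
).float()
rotations = controller(direction, style)  # shape: (1, NUM_JOINTS, 4)
```

In a game engine, rotations predicted this way would typically be blended onto the base motion-capture pose each frame, which is one way a learned controller can stay interactive for arbitrary target directions while preserving the quality of the captured motion.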

Supplementary Material

MP4 File (MIG_video_slides_voicedFinal.mp4)
Short paper video and supplemental graphs.


Cited By

  • (2024) S3: Speech, Script and Scene driven Head and Eye Animation. ACM Transactions on Graphics 43, 4 (2024), 1–12. https://doi.org/10.1145/3658172. Online publication date: 19 July 2024.


      Published In

      MIG '23: Proceedings of the 16th ACM SIGGRAPH Conference on Motion, Interaction and Games
      November 2023
      224 pages
      ISBN:9798400703935
      DOI:10.1145/3623264


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. computer animation
      2. expressive agents
      3. gaze animation
      4. machine learning

      Qualifiers

      • Short-paper
      • Research
      • Refereed limited

      Conference

      MIG '23

