DOI: 10.1145/3551626.3564976

Emotional Talking Faces: Making Videos More Expressive and Realistic

Published: 13 December 2022

Abstract

Lip synchronization and talking-face generation have drawn growing interest from the research community with the rise of digital communication across many fields. Prior works propose several elegant solutions to this problem; however, they often fail to produce realistic-looking videos that account for people's expressions and emotions. To mitigate this, we build a talking-face generation framework conditioned on a categorical emotion to generate videos with appropriate expressions, making them look more realistic and convincing. Across six emotions, i.e., anger, disgust, fear, happiness, neutral, and sadness, we show that our model generalizes across identities, emotions, and languages.
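The conditioning idea described in the abstract, a generator driven by speech features plus one of six categorical emotions, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the use of a one-hot emotion code, and the embedding shapes are all assumptions.

```python
# Hypothetical sketch of categorical-emotion conditioning for a
# talking-face generator. The six emotion labels come from the paper's
# abstract; everything else (names, shapes) is illustrative.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness"]

def one_hot(emotion: str) -> list[float]:
    """Encode one of the six categorical emotions as a one-hot vector."""
    vec = [0.0] * len(EMOTIONS)
    vec[EMOTIONS.index(emotion)] = 1.0
    return vec

def condition_features(audio_embedding: list[float], emotion: str) -> list[float]:
    """Concatenate a per-frame audio embedding with the emotion code,
    yielding the conditioning input a generator could consume."""
    return audio_embedding + one_hot(emotion)
```

In this sketch the generator would receive the concatenated vector at every frame, so the same audio can yield different facial expressions depending on the chosen emotion label.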


Cited By

  • (2023) Emotionally Enhanced Talking Face Generation. Proceedings of the 1st International Workshop on Multimedia Content Generation and Evaluation: New Methods and Practice, 81-90. DOI: 10.1145/3607541.3616812. Online publication date: 29-Oct-2023.


      Published In

      MMAsia '22: Proceedings of the 4th ACM International Conference on Multimedia in Asia
      December 2022
      296 pages
      ISBN:9781450394789
      DOI:10.1145/3551626
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. emotion capture
      2. lip sync
      3. multimodal
      4. talking face generation

      Qualifiers

      • Demonstration

      Conference

      MMAsia '22: ACM Multimedia Asia
      December 13-16, 2022
      Tokyo, Japan

      Acceptance Rates

      Overall acceptance rate: 59 of 204 submissions (29%)

      Article Metrics

      • Downloads (last 12 months): 19
      • Downloads (last 6 weeks): 3
      Reflects downloads up to 17 Feb 2025.

