
Towards Realistic Real Time Speech-Driven Facial Animation

  • Conference paper
Intelligent Virtual Agents (IVA 2008)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5208)


Abstract

In this work we concentrate on finding correlations between the speech signal and the occurrence of facial gestures, with the goal of creating believable virtual humans. We propose a method for implementing facial gestures as a valuable part of human behavior and communication. The information needed to generate the facial gestures is extracted from speech prosody by analyzing natural speech in real time. This work builds on the previously developed HUGE architecture for statistically based facial gesturing and extends our previous work on automatic real-time lip sync.
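The paper does not publish code, but the kind of real-time prosody analysis it describes — extracting frame-level cues such as energy and pitch from natural speech — can be sketched roughly as follows. The sample rate, frame/hop sizes, and the 50–400 Hz pitch search range below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def prosody_features(signal, sr=16000, frame_ms=25, hop_ms=10):
    """Per-frame energy and a crude autocorrelation pitch estimate.

    All parameter values here are illustrative defaults, not the
    paper's actual configuration.
    """
    frame = int(sr * frame_ms / 1000)   # samples per analysis window
    hop = int(sr * hop_ms / 1000)       # step between windows
    feats = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame]
        energy = float(np.sum(x * x))
        # autocorrelation; pick the peak lag within a plausible pitch range
        ac = np.correlate(x, x, mode="full")[frame - 1:]
        lo, hi = sr // 400, sr // 50    # lags spanning 400 Hz down to 50 Hz
        lag = lo + int(np.argmax(ac[lo:hi]))
        feats.append((energy, sr / lag))
    return feats

# demo: a pure 200 Hz tone should be tracked at ~200 Hz
tone = np.sin(2 * np.pi * 200 * np.arange(16000) / 16000)
print(prosody_features(tone)[0][1])  # ~200.0
```

In a real-time pipeline such per-frame features would be streamed into the statistical gesture model rather than computed over a whole recording; the batch loop here is only for clarity.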


References

  1. Albrecht, I., Haber, J., Seidel, H.: Automatic Generation of Non-Verbal Facial Expressions from Speech. In: Proceedings of CGI (2002)

  2. Malcangi, M., de Tintis, R.: Audio Based Real-Time Speech Animation of Embodied Conversational Agents. LNCS. Springer, Heidelberg (2004)

  3. Zoric, G., Pandzic, I.: Real-Time Language Independent Lip Synchronization Method Using a Genetic Algorithm. Signal Processing, special issue on Multimodal Human-Computer Interfaces (2006)

  4. Smid, K., Zoric, G., Pandzic, I.P.: HUGE: Universal Architecture for Statistically Based HUman GEsturing. In: Gratch, J., Young, M., Aylett, R.S., Ballin, D., Olivier, P. (eds.) IVA 2006. LNCS (LNAI), vol. 4133, pp. 256–269. Springer, Heidelberg (2006)



Editor information

Helmut Prendinger, James Lester, Mitsuru Ishizuka


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cerekovic, A., Zoric, G., Smid, K., Pandzic, I.S. (2008). Towards Realistic Real Time Speech-Driven Facial Animation. In: Prendinger, H., Lester, J., Ishizuka, M. (eds) Intelligent Virtual Agents. IVA 2008. Lecture Notes in Computer Science (LNAI), vol. 5208. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85483-8_51


  • DOI: https://doi.org/10.1007/978-3-540-85483-8_51

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-85482-1

  • Online ISBN: 978-3-540-85483-8

  • eBook Packages: Computer Science (R0)
