
[HUGE]: Universal Architecture for Statistically Based HUman GEsturing

Conference paper
Intelligent Virtual Agents (IVA 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4133)


Abstract

We introduce HUGE, a universal architecture for a statistically based HUman GEsturing system that produces and uses statistical models of facial gestures driven by any kind of inducement. By inducement we mean any signal that occurs in parallel with the production of gestures in human behaviour and that may be statistically correlated with their occurrence, e.g. the spoken text, the audio signal of speech, biosignals, etc. This correlation is exploited in two phases. In the training phase, a statistical model of gestures is built from a corpus of gesture sequences and the corresponding inducement data sequences. In the runtime phase, raw, previously unseen inducement data is used to trigger (induce) the real-time gestures of the agent according to the previously constructed model. We present the general architecture and implementation issues of our system and further clarify it through two case studies. We believe this universal architecture is useful for experimenting with various kinds of potential inducement signals and their features, and for exploring how such signals or features correlate with gesturing behaviour.
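A minimal sketch of the two-phase idea described in the abstract, under stated assumptions: the class, function names, and the simple conditional frequency model below are illustrative only and are not the published HUGE implementation, which is a general architecture independent of any particular statistical model.

```python
# Illustrative sketch of a two-phase "inducement -> gesture" statistical model.
# The conditional frequency table used here is an assumption for exposition.

import random
from collections import Counter, defaultdict


class InducementGestureModel:
    """Conditional frequency model approximating P(gesture | inducement feature)."""

    def __init__(self):
        # For each discretised inducement feature, count co-occurring gestures.
        self.counts = defaultdict(Counter)

    # --- Training phase: build statistics from aligned corpus sequences ---
    def train(self, inducement_seq, gesture_seq):
        for feature, gesture in zip(inducement_seq, gesture_seq):
            self.counts[feature][gesture] += 1

    # --- Runtime phase: induce a gesture from new, previously unseen data ---
    def induce(self, feature):
        dist = self.counts.get(feature)
        if not dist:
            return "none"  # no statistics gathered for this feature
        gestures, weights = zip(*dist.items())
        return random.choices(gestures, weights=weights, k=1)[0]


# Example: inducement features could be prosodic labels extracted from speech.
model = InducementGestureModel()
model.train(["pitch_rise", "pause", "pitch_rise", "stress"],
            ["eyebrow_raise", "blink", "head_nod", "eyebrow_raise"])
print(model.induce("pitch_rise"))  # e.g. "eyebrow_raise" or "head_nod"
```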




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Smid, K., Zoric, G., Pandzic, I.S. (2006). [HUGE]: Universal Architecture for Statistically Based HUman GEsturing. In: Gratch, J., Young, M., Aylett, R., Ballin, D., Olivier, P. (eds) Intelligent Virtual Agents. IVA 2006. Lecture Notes in Computer Science, vol 4133. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11821830_21


  • DOI: https://doi.org/10.1007/11821830_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-37593-7

  • Online ISBN: 978-3-540-37594-4

  • eBook Packages: Computer Science, Computer Science (R0)
