Nonverbal Behavior Generator for Embodied Conversational Agents

  • Conference paper
Intelligent Virtual Agents (IVA 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4133)


Abstract

Believable nonverbal behaviors for embodied conversational agents (ECA) can create a more immersive experience for users and improve the effectiveness of communication. This paper describes a nonverbal behavior generator that analyzes the syntactic and semantic structure of the surface text as well as the affective state of the ECA and annotates the surface text with appropriate nonverbal behaviors. A number of video clips of people conversing were analyzed to extract the nonverbal behavior generation rules. The system works in real-time and is user-extensible so that users can easily modify or extend the current behavior generation rules.
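
The generator described in the abstract is rule-driven: patterns in the surface text, together with the agent's affective state, trigger annotations such as head nods, head shakes, or brow movements. The sketch below illustrates how such a rule-based annotator could be structured; the rule set, tag names, and the `affective_state` parameter are illustrative assumptions, not the rules or markup reported in the paper (which were extracted from analysis of conversation videos).

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    # A behavior generation rule: a lexical pattern plus the nonverbal
    # behavior tag to insert around the matched span.
    name: str
    pattern: re.Pattern
    behavior: str

# Placeholder rules for illustration only; the paper derives its rules from
# videotaped conversations, not from simple keyword lists like these.
DEFAULT_RULES = [
    Rule("affirmation", re.compile(r"\b(yes|yeah|sure|of course)\b", re.I), "head_nod"),
    Rule("negation", re.compile(r"\b(no|not|never)\b", re.I), "head_shake"),
    Rule("intensifier", re.compile(r"\b(very|really|quite|extremely)\b", re.I), "brow_raise"),
]

def annotate(surface_text, rules=DEFAULT_RULES, affective_state="neutral"):
    """Wrap matched spans of the surface text in behavior tags.

    The affective state is only recorded on the utterance-level tag here;
    a fuller implementation could use it to select or suppress rules
    (e.g. suppressing head nods when the agent is angry).
    """
    annotated = surface_text
    for rule in rules:
        annotated = rule.pattern.sub(
            lambda m, tag=rule.behavior: f"<{tag}>{m.group(0)}</{tag}>", annotated
        )
    return f'<utterance affect="{affective_state}">{annotated}</utterance>'

if __name__ == "__main__":
    print(annotate("No, I really don't think so.", affective_state="concerned"))
    # <utterance affect="concerned"><head_shake>No</head_shake>, I
    # <brow_raise>really</brow_raise> don't think so.</utterance>
```

In the actual system, the annotated text is passed on for real-time animation, and the rules are kept in a user-editable form so the behavior repertoire can be modified or extended without changing the generator itself.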

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lee, J., Marsella, S. (2006). Nonverbal Behavior Generator for Embodied Conversational Agents. In: Gratch, J., Young, M., Aylett, R., Ballin, D., Olivier, P. (eds.) Intelligent Virtual Agents. IVA 2006. Lecture Notes in Computer Science (LNAI), vol 4133. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11821830_20

  • DOI: https://doi.org/10.1007/11821830_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-37593-7

  • Online ISBN: 978-3-540-37594-4

  • eBook Packages: Computer Science, Computer Science (R0)
