
Toward a Universal Platform for Integrating Embodied Conversational Agent Components

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4252)

Abstract

Embodied Conversational Agents (ECAs) are computer-generated, life-like characters that interact with human users in face-to-face conversation. Achieving natural multi-modal conversation makes ECA systems highly sophisticated: they require many building blocks and are therefore difficult for an individual research group to develop on its own. This paper proposes a generic architecture, the Universal ECA Framework, currently under development, which comprises a blackboard-based platform and a high-level protocol for integrating general-purpose ECA components and easing the prototyping of ECA systems.
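
The framework itself is described in the full paper; purely as an illustration of the blackboard-style integration the abstract mentions, the sketch below shows how independent ECA components (a hypothetical dialogue manager and animation player, with made-up message types and fields) could exchange messages through a shared blackboard. None of the names here are taken from the Universal ECA Framework.

    # Minimal, hypothetical sketch of blackboard-style message routing between
    # ECA components. Component names, message types, and payload fields are
    # illustrative assumptions, not the framework's actual protocol.

    from collections import defaultdict
    from typing import Callable, Dict, List


    class Blackboard:
        """Routes typed messages from publishers to subscribed components."""

        def __init__(self) -> None:
            self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, message_type: str, handler: Callable[[dict], None]) -> None:
            self._subscribers[message_type].append(handler)

        def publish(self, message_type: str, payload: dict) -> None:
            for handler in self._subscribers[message_type]:
                handler(payload)


    board = Blackboard()

    # Hypothetical dialogue manager: turns recognized user input into an utterance plan.
    def dialogue_manager(msg: dict) -> None:
        board.publish("agent.utterance", {"text": f"Hello! You said: {msg['text']}"})

    # Hypothetical animation player: consumes utterance plans and "renders" them.
    def animation_player(msg: dict) -> None:
        print(f"[animation] speaking with lip-sync: {msg['text']}")

    board.subscribe("user.input", dialogue_manager)
    board.subscribe("agent.utterance", animation_player)

    # Simulated speech-recognizer output entering the blackboard.
    board.publish("user.input", {"text": "good morning"})

In a sketch like this, components never call each other directly; they only publish to and subscribe on the blackboard, which is what makes it easy to swap individual components in and out when prototyping.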





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Huang, HH. et al. (2006). Toward a Universal Platform for Integrating Embodied Conversational Agent Components. In: Gabrys, B., Howlett, R.J., Jain, L.C. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2006. Lecture Notes in Computer Science (LNAI), vol 4252. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893004_28


  • DOI: https://doi.org/10.1007/11893004_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46537-9

  • Online ISBN: 978-3-540-46539-3

  • eBook Packages: Computer Science, Computer Science (R0)
