Abstract
An Embodied Conversational Agent (ECA) is a user interface metaphor that enables natural communication of information during human-computer interaction across synergistic modalities, including voice, gesture, emotion, and text. Owing to their anthropomorphic representation and their ability to express human-like behavior, ECAs are becoming popular interface front-ends for dialog and conversational applications. One important prerequisite for the efficient authoring of such ECA-based applications is a suitable programming language that exploits the expressive possibilities of multimodally blended messages conveyed to the user. In this paper, we present ECAF, an architecture and interaction language that we have used to author several ECA-based applications. We also report feedback from usability testing we carried out to assess user acceptance of several multimodal blending strategies.
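To make the idea of a multimodally blended message concrete, the fragment below sketches what an authoring-language utterance combining speech, gesture, and emotion might look like. This is a purely hypothetical illustration in the spirit of related XML-based agent markup languages (e.g. MPML, VHML); it is not the actual ECAF syntax, and all element and attribute names are invented for this sketch.

```xml
<!-- Hypothetical sketch, NOT actual ECAF syntax: one utterance
     blending three modality channels that play out in parallel. -->
<utterance>
  <speech>Welcome back! Here is today's schedule.</speech>
  <!-- gesture aligned with the start of the spoken phrase -->
  <gesture type="wave" align="start"/>
  <!-- emotional coloring applied for the whole utterance -->
  <emotion type="joy" intensity="0.6"/>
</utterance>
```

In such markup, the key authoring decision is how the channels are synchronized, i.e. whether the gesture and emotion are anchored to word boundaries, to the utterance as a whole, or scheduled independently; these are exactly the kinds of blending strategies the paper evaluates.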
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Kunc, L., Kleindienst, J. (2007). ECAF: Authoring Language for Embodied Conversational Agents. In: Matoušek, V., Mautner, P. (eds) Text, Speech and Dialogue. TSD 2007. Lecture Notes in Computer Science(), vol 4629. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74628-7_28
DOI: https://doi.org/10.1007/978-3-540-74628-7_28
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74627-0
Online ISBN: 978-3-540-74628-7
eBook Packages: Computer Science (R0)