Abstract
The paper describes an experimental presentation system that automatically generates dynamic ECA-based presentations from structured data including text content, images, music and sounds, videos, etc. The Embodied Conversational Agent (ECA) thus acts as a moderator in the chosen presentation context, typically personal diaries. Since an ECA represents a rich channel for conveying both verbal and non-verbal messages, we are researching ECAs as facilitators that transpose "dry" data such as diaries and blogs into more lively and dynamic presentations based on ontologies. We built our framework on ECAF, an existing toolkit that supports runtime generation of ECA agents. We describe our extensions to the toolkit and give an overview of the current system architecture. We also present the particular Grandma TV scenario, in which a family uses the ECA automatic presentation engine to deliver weekly family news to distant grandparents. Recently conducted usability studies highlight the pros and cons of the presented approach.
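The system's input is structured data such as blog or diary feeds (the paper references the RSS 2.0 specification). As a minimal sketch of the "dry data to lively presentation" idea, the snippet below maps RSS items to simple verbal/non-verbal presentation cues. The cue format here is invented for illustration only; the actual system uses the ECAF authoring language, which is not reproduced here.

```python
# Hypothetical sketch: turning RSS 2.0 items (the structured "diary" input)
# into simple presentation cues for an ECA moderator. The cue dictionary
# format is an assumption made for this example, not the ECAF language.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Family News</title>
  <item><title>Trip to the lake</title>
    <description>We spent Saturday swimming.</description></item>
  <item><title>School play</title>
    <description>Anna played the lead role.</description></item>
</channel></rss>"""

def feed_to_cues(feed_xml: str) -> list[dict]:
    """Map each RSS <item> to one spoken cue paired with a gesture."""
    root = ET.fromstring(feed_xml)
    cues = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        body = item.findtext("description", default="")
        cues.append({"gesture": "smile",            # non-verbal channel
                     "speak": f"{title}. {body}"})  # verbal channel
    return cues

for cue in feed_to_cues(SAMPLE_FEED):
    print(cue["gesture"], "->", cue["speak"])
```

A real pipeline would additionally select accompanying media (images, music, video) per item and schedule gestures against the synthesized speech, as the paper's architecture describes.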
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Kunc, L., Kleindienst, J., Slavík, P. (2008). Talking Head as Life Blog. In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds) Text, Speech and Dialogue. TSD 2008. Lecture Notes in Computer Science(), vol 5246. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87391-4_47
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-87390-7
Online ISBN: 978-3-540-87391-4
eBook Packages: Computer Science, Computer Science (R0)