Abstract
This paper presents a system that automatically adds gestures to an embodied virtual character by processing information from a simple text input. Gestures are generated based on an analysis of the linguistic and contextual information of the input text. The system is embedded in the virtual world Second Life and consists of an in-world object and an off-world server component that handles the analysis. Either a user-controlled avatar or a non-user-controlled character can be used to display the gestures, which are timed with speech output from a Text-to-Speech system, so that the character shows non-verbal behavior without requiring the user to select it manually.
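The paper's pipeline (analyze input text, pick gestures, time them against synthesized speech) can be illustrated with a minimal sketch. This is not the authors' implementation: the gesture rules, the `WORDS_PER_SECOND` speech-rate constant, and the `annotate` function are hypothetical stand-ins for the linguistic analysis and TTS timing the system actually performs.

```python
# Illustrative sketch only (assumed names, not the paper's code): a
# keyword-based gesture annotator that tags an utterance with gesture
# animations and rough start times, of the kind an in-world object
# might receive back from an off-world analysis server.

import re

# Hypothetical mapping from lexical cues to gesture animation names.
GESTURE_RULES = {
    r"\b(hello|hi|welcome)\b": "wave",
    r"\b(this|that|here|there)\b": "point",
    r"\b(big|huge|large)\b": "wide_arms",
    r"\b(no|not|never)\b": "head_shake",
}

WORDS_PER_SECOND = 2.5  # assumed average speech rate of the TTS voice


def annotate(text):
    """Return (word, start_time_seconds, gesture-or-None) triples."""
    words = text.split()
    annotations = []
    for i, word in enumerate(words):
        # Naive timing model: every word takes the same amount of time.
        start = i / WORDS_PER_SECOND
        gesture = None
        for pattern, anim in GESTURE_RULES.items():
            if re.search(pattern, word.lower()):
                gesture = anim
                break
        annotations.append((word, round(start, 2), gesture))
    return annotations


if __name__ == "__main__":
    for word, t, g in annotate("Hello there, this house is huge"):
        print(f"{t:5.2f}s  {word:10s}  {g or '-'}")
```

A real system would replace the uniform word-duration assumption with per-word timings reported by the TTS engine, and the keyword rules with the linguistic and contextual analysis described in the paper.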
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Breitfuss, W., Prendinger, H., Ishizuka, M. (2009). Automatic Generation of Non-verbal Behavior for Agents in Virtual Worlds: A System for Supporting Multimodal Conversations of Bots and Avatars. In: Ozok, A.A., Zaphiris, P. (eds) Online Communities and Social Computing. OCSC 2009. Lecture Notes in Computer Science, vol 5621. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02774-1_17
DOI: https://doi.org/10.1007/978-3-642-02774-1_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-02773-4
Online ISBN: 978-3-642-02774-1
eBook Packages: Computer Science (R0)