Abstract
In this paper we present an agent that analyses human full-body movements and responds expressively with copying behaviour. Our work focuses on the analysis of human full-body movement for animating a virtual agent, Greta, that can perceive and interpret users’ expressivity and respond appropriately. The system takes as input video data of a dancer moving in space. The video is analysed, and motion cues are extracted automatically, in EyesWeb; we consider the amplitude and speed of movement. To generate the animation for the agent, these motion cues are then mapped onto the agent’s corresponding expressivity parameters. We also present a behaviour markup language for virtual agents that defines the values of the expressivity parameters on gestures.
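As a brief illustration of the mapping step described above, the following is a minimal Python sketch, not the authors' implementation: it rescales two motion cues extracted from video (movement amplitude and speed) onto two of Greta's expressivity parameters (spatial extent and temporal extent). The cue ranges, the linear mapping, and the function names are illustrative assumptions; only the cue names and the expressivity parameters come from the paper and its cited work.

```python
def normalise(value, lo, hi):
    """Clamp a raw cue value to [lo, hi] and rescale it into [0, 1]."""
    if hi <= lo:
        raise ValueError("invalid normalisation range")
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)


def cues_to_expressivity(amplitude, speed,
                         amp_range=(0.0, 1.0), speed_range=(0.0, 2.0)):
    """Map normalised motion cues onto expressivity parameters in [-1, 1],
    the range typically used for Greta's expressivity attributes.
    The ranges and the linear mapping here are assumptions for illustration.
    """
    # Wider movement -> larger spatial extent; faster movement -> larger temporal extent.
    spatial = 2.0 * normalise(amplitude, *amp_range) - 1.0
    temporal = 2.0 * normalise(speed, *speed_range) - 1.0
    return {"spatial_extent": spatial, "temporal_extent": temporal}


if __name__ == "__main__":
    # Example: a wide, fast movement yields high spatial and temporal extent.
    print(cues_to_expressivity(amplitude=0.8, speed=1.6))
```

The resulting parameter values could then be written into the gesture tags of the behaviour markup language mentioned above, so that the agent reproduces the observed movement quality rather than its exact trajectory.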