Abstract
This paper presents an expressive gesture model that generates communicative gestures accompanying speech for the humanoid robot Nao. The work focuses mainly on the expressivity of robot gestures and on their coordination with speech. To reach this objective, we have extended and adapted our existing virtual agent platform GRETA to the robot. Gestural prototypes are described symbolically and stored in a gestural database called a lexicon. Given a set of intentions and emotional states to communicate, the system selects the corresponding gestures from the robot lexicon. The selected gestures are then planned so as to be synchronized with speech, and instantiated as robot joint values while taking into account gestural expressivity parameters such as temporal extent, spatial extent, fluidity, power, and repetition. The paper provides a detailed overview of the proposed model.
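The abstract outlines a pipeline: a symbolic gesture is selected from the lexicon according to the communicative intention, its timing is aligned with speech, and it is instantiated as joint values modulated by expressivity parameters. The following minimal Python sketch illustrates that flow in the abstract's own terms; every name here (Expressivity, Gesture, LEXICON, plan_and_instantiate, and the parameter ranges) is a hypothetical illustration, not the GRETA or NAOqi API.

```python
from dataclasses import dataclass

@dataclass
class Expressivity:
    """Expressivity parameters named in the abstract (ranges are assumptions)."""
    temporal_extent: float = 1.0   # speed scaling of the gesture stroke
    spatial_extent: float = 1.0    # amplitude scaling of the trajectory
    fluidity: float = 0.5          # smoothness of transitions between phases
    power: float = 0.5             # acceleration/tension of the movement
    repetition: int = 0            # number of stroke repetitions

@dataclass
class Gesture:
    """A symbolic gesture prototype as stored in the lexicon."""
    name: str
    keyframes: list  # (symbolic pose, relative time) pairs per gesture phase

# Toy lexicon mapping communicative intentions to gesture prototypes.
LEXICON = {
    "greeting": Gesture("wave", [("raise", 0.0), ("wave", 0.5), ("lower", 1.0)]),
    "emphasis": Gesture("beat", [("up", 0.0), ("down", 0.3)]),
}

def plan_and_instantiate(intention: str, speech_stroke_time: float,
                         expr: Expressivity) -> list:
    """Select a gesture, align its phases with speech, emit (time, pose) pairs."""
    gesture = LEXICON[intention]
    frames = []
    for pose, rel_t in gesture.keyframes:
        # Align keyframe times to the speech stroke; temporal extent
        # compresses or stretches the gesture in time.
        t = speech_stroke_time + rel_t / max(expr.temporal_extent, 1e-6)
        # Spatial extent would scale the pose amplitude before mapping the
        # symbolic pose onto actual robot joint angles (omitted here).
        frames.append((t, pose))
    return frames

if __name__ == "__main__":
    expr = Expressivity(temporal_extent=1.5, spatial_extent=0.8)
    print(plan_and_instantiate("greeting", speech_stroke_time=2.0, expr=expr))
```

The design point the sketch reflects is the one stated in the abstract: gestures are stored symbolically and only converted to joint values at the last stage, so the same lexicon entry can be rendered with different expressivity settings or on different embodiments.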