
Expressive Gesture Model for Humanoid Robot

  • Conference paper
Affective Computing and Intelligent Interaction (ACII 2011)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6975)

Abstract

This paper presents an expressive gesture model that generates communicative gestures accompanying speech for the humanoid robot Nao. The work focuses on the expressivity of robot gestures and their coordination with speech. To this end, we have extended our existing virtual agent platform GRETA and adapted it to the robot. Gestural prototypes are described symbolically and stored in a gestural database called a lexicon. Given a set of intentions and emotional states to communicate, the system selects the corresponding gestures from the robot lexicon. The selected gestures are then planned to synchronize with speech and instantiated as robot joint values, taking into account gestural expressivity parameters such as temporal extension, spatial extension, fluidity, power, and repetitivity. This paper provides a detailed overview of the proposed model.
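The pipeline the abstract describes runs in three stages: select a symbolic gesture prototype from the lexicon, schedule its phases so the stroke coincides with the stressed part of the speech, and instantiate the result as joint values modulated by the expressivity parameters. The sketch below illustrates that flow in Python. It is not the authors' GRETA/Nao implementation: the lexicon entry, joint angles, scaling factors, and the plan function are all hypothetical assumptions, and the fluidity and power parameters (which would shape the interpolation between keyframes) are left out.

from dataclasses import dataclass

@dataclass
class Gesture:
    """Symbolic gesture prototype, as it might be stored in a lexicon."""
    name: str
    # Each phase: (phase name, target pose {joint: angle in rad}, nominal duration in s)
    phases: list

@dataclass
class Expressivity:
    """Expressivity parameters in [-1, 1] around a neutral default of 0."""
    temporal: float = 0.0   # > 0: faster execution (temporal extension)
    spatial: float = 0.0    # > 0: wider amplitude (spatial extension)
    repetition: int = 0     # extra repeats of the stroke phase (repetitivity)
    # Fluidity and power would shape the interpolation/acceleration profile
    # between keyframes; that stage is omitted from this sketch.

# Hypothetical one-entry lexicon with Nao-style joint names and made-up angles.
LEXICON = {
    "greet": Gesture("greet", [
        ("prepare", {"RShoulderPitch": -1.0, "RElbowRoll": 1.2}, 0.4),
        ("stroke",  {"RShoulderPitch": -1.2, "RElbowRoll": 0.3}, 0.3),
        ("retract", {"RShoulderPitch":  1.4, "RElbowRoll": 0.5}, 0.5),
    ]),
}

def plan(intention: str, stroke_onset: float, expr: Expressivity):
    """Select the gesture for an intention and instantiate joint keyframes,
    timed so the stroke phase begins on the stressed word of the speech."""
    gesture = LEXICON[intention]
    amp = 1.0 + 0.3 * expr.spatial               # spatial extension scales amplitude
    tempo = max(0.3, 1.0 - 0.3 * expr.temporal)  # temporal extension scales durations

    # Repetitivity: replay the stroke phase expr.repetition extra times.
    phases = []
    for name, pose, dur in gesture.phases:
        repeats = 1 + (expr.repetition if name == "stroke" else 0)
        phases.extend([(name, pose, dur)] * repeats)

    durations = [dur * tempo for _, _, dur in phases]
    # Work backwards so the pose preceding the stroke is reached at stroke_onset.
    stroke_idx = next(i for i, (name, _, _) in enumerate(phases) if name == "stroke")
    t = stroke_onset - sum(durations[:stroke_idx])

    keyframes = []  # (time each pose is reached, phase name, scaled joint values)
    for (name, pose, _), dur in zip(phases, durations):
        t += dur
        keyframes.append((round(t, 3), name, {j: a * amp for j, a in pose.items()}))
    return keyframes

if __name__ == "__main__":
    # A fast, wide greeting with one extra stroke, stroke landing at t = 1.2 s.
    expr = Expressivity(temporal=0.5, spatial=0.8, repetition=1)
    for t, phase, pose in plan("greet", stroke_onset=1.2, expr=expr):
        print(f"t={t:6.3f}s  {phase:8s}  {pose}")

Scheduling backwards from the stroke onset mirrors the synchronization constraint in the abstract: the preparation phase is shifted earlier so the most meaningful part of the gesture lands on the accompanying word.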

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Quoc Anh, L., Pelachaud, C. (2011). Expressive Gesture Model for Humanoid Robot. In: D’Mello, S., Graesser, A., Schuller, B., Martin, JC. (eds) Affective Computing and Intelligent Interaction. ACII 2011. Lecture Notes in Computer Science, vol 6975. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24571-8_24

  • DOI: https://doi.org/10.1007/978-3-642-24571-8_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24570-1

  • Online ISBN: 978-3-642-24571-8

  • eBook Packages: Computer Science (R0)
