Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6800)

Abstract

Multimodal interfaces that support ECAs enable novel concepts in human-machine interaction and provide several communication channels, such as natural speech, facial expressions, and various body gestures. This paper presents the synthesis of expressive behaviour within the realm of affective computing. By describing different expressive parameters (e.g. temporal, spatial, power, and different degrees of fluidity) and the context of unplanned behaviour, it addresses the synthesis of expressive behaviour by enabling the ECA to visualize complex human-like body movements (e.g. expressions, emotional speech, hand and head gestures, gaze, and complex emotions). Movements performed by our ECA EVA are reactive, do not require extensive planning phases, and can be represented hierarchically as a set of different events. The animation concepts prevent the synthesis of unnatural movements even when two or more behavioural events influence the same segments of the body (e.g. speech combined with different facial expressions).
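
The abstract describes behaviour as hierarchical, reactive events carrying expressive parameters (temporal, spatial, power, fluidity), with a rule that keeps two events from producing an unnatural blend when they target the same body segments. A minimal Python sketch of one way such a scheme could be organised is shown below; the class names, parameter fields, and the priority rule for resolving segment conflicts are illustrative assumptions, not the paper's actual EVA implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ExpressiveParams:
    """Hypothetical container for the expressive parameters named in the abstract."""
    temporal: float = 1.0   # speed scaling of the movement
    spatial: float = 1.0    # amplitude / extent of the movement
    power: float = 1.0      # force or acceleration of the movement
    fluidity: float = 1.0   # smoothness of transitions between poses

@dataclass
class BehaviourEvent:
    """A reactive behaviour event targeting a set of body segments."""
    name: str
    segments: set[str]                  # e.g. {"jaw", "lips"}
    params: ExpressiveParams
    priority: int = 0                   # higher priority wins a conflict
    children: list["BehaviourEvent"] = field(default_factory=list)  # hierarchical sub-events

def resolve(events: list[BehaviourEvent]) -> dict[str, BehaviourEvent]:
    """Give each body segment exactly one controlling event.

    When two events (e.g. speech and a facial expression) influence the
    same segment, the higher-priority event takes over that segment, so
    no segment is driven by two conflicting movements at once.
    """
    owner: dict[str, BehaviourEvent] = {}
    for ev in events:
        for seg in ev.segments:
            if seg not in owner or ev.priority > owner[seg].priority:
                owner[seg] = ev
    return owner

# Speech visemes and a smile both touch the mouth region: speech keeps
# the jaw and lips, while the smile still animates the cheeks.
speech = BehaviourEvent("speech", {"jaw", "lips"}, ExpressiveParams(temporal=1.2), priority=2)
smile = BehaviourEvent("smile", {"lips", "cheeks"}, ExpressiveParams(spatial=0.8), priority=1)
print({seg: ev.name for seg, ev in resolve([speech, smile]).items()})
# -> {'jaw': 'speech', 'lips': 'speech', 'cheeks': 'smile'}
```

Here a simple per-segment priority rule stands in for whatever blending policy the paper actually uses; the point is only that each segment ends up driven by a single coherent event.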

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Mlakar, I., Rojc, M. (2011). Towards ECA’s Animation of Expressive Complex Behaviour. In: Esposito, A., Vinciarelli, A., Vicsi, K., Pelachaud, C., Nijholt, A. (eds) Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues. Lecture Notes in Computer Science, vol 6800. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25775-9_19

  • DOI: https://doi.org/10.1007/978-3-642-25775-9_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-25774-2

  • Online ISBN: 978-3-642-25775-9

  • eBook Packages: Computer Science, Computer Science (R0)
