Abstract
Multimodal interfaces supporting ECAs enable novel concepts in human-machine interaction and provide several communication channels, such as natural speech, facial expression, and body gestures. This paper addresses the synthesis of expressive behaviour within the realm of affective computing. By describing expressive parameters (e.g. temporal extent, spatial extent, power, and degree of fluidity) and the context of unplanned behaviour, it enables the ECA to visualize complex human-like body movements (e.g. facial expressions, emotional speech, hand and head gestures, gaze, and complex emotions). Movements performed by our ECA EVA are reactive, do not require extensive planning phases, and can be represented hierarchically as a set of behavioural events. The animation concepts prevent the synthesis of unnatural movements even when two or more behavioural events influence the same body segments (e.g. speech combined with different facial expressions).
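The idea of several behavioural events influencing the same body segments without producing unnatural movement can be illustrated with a minimal sketch. This is not the authors' implementation; the `BehaviourEvent` class, segment names, and weighted-blend rule are hypothetical assumptions, chosen only to show how concurrent events (e.g. speech visemes and a smile both driving the mouth) can be resolved per segment instead of overwriting one another.

```python
from dataclasses import dataclass

@dataclass
class BehaviourEvent:
    """A hypothetical behavioural event driving a set of body segments."""
    name: str
    segments: dict   # segment name -> target pose value (scalar for brevity)
    weight: float = 1.0

def blend(events):
    """Blend concurrent events per body segment by normalised weight,
    so no single event overwrites another on a shared segment."""
    totals, weights = {}, {}
    for ev in events:
        for seg, value in ev.segments.items():
            totals[seg] = totals.get(seg, 0.0) + ev.weight * value
            weights[seg] = weights.get(seg, 0.0) + ev.weight
    return {seg: totals[seg] / weights[seg] for seg in totals}

# Two events overlap on the 'lips' segment; 'jaw' and 'cheeks' each
# have a single driver, so they pass through unchanged.
speech = BehaviourEvent("viseme_aa", {"jaw": 0.8, "lips": 0.4})
smile = BehaviourEvent("smile", {"lips": 1.0, "cheeks": 0.6})

pose = blend([speech, smile])
```

With equal weights, the shared `lips` channel receives the average of both contributions; raising one event's weight would bias the shared segments toward it, which is one simple way to realise the "power" parameter described above.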
© 2011 Springer-Verlag Berlin Heidelberg
Mlakar, I., Rojc, M. (2011). Towards ECA’s Animation of Expressive Complex Behaviour. In: Esposito, A., Vinciarelli, A., Vicsi, K., Pelachaud, C., Nijholt, A. (eds) Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues. Lecture Notes in Computer Science, vol 6800. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25775-9_19