Abstract
This book investigates the features at the core of human interactions in order to model the emotional processes involved, with the goal of designing and developing autonomous systems and algorithms able to detect early signs of changes in moods and emotional states. Attention is focused on emotional social features and on the human ability to encode and decode emotional social cues while interacting. To this end, the book proposes a series of investigations that gather behavioral data from speech, handwriting, and facial, vocal, and gestural expressions, through behavioral tasks designed to produce changes in the perception of emotional social cues. Specific scenarios are devised to assess users’ empathic and social competencies. The collected data are used to gain knowledge of how behavioral and interactional features are affected by individuals’ moods and emotional states. This knowledge can be exploited to build multidimensional models of multimodal interactional features, which will serve to measure the degree of empathic relationship developed between individuals and enable the design and development of cost-effective emotion-aware technologies for applicative contexts such as remote health care services and robotic assistance.
Copyright information
© 2016 Springer International Publishing Switzerland
About this chapter
Cite this chapter
Esposito, A., Jain, L.C. (2016). Modeling Emotions in Robotic Socially Believable Behaving Systems. In: Esposito, A., Jain, L.C. (eds) Toward Robotic Socially Believable Behaving Systems - Volume I. Intelligent Systems Reference Library, vol 105. Springer, Cham. https://doi.org/10.1007/978-3-319-31056-5_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-31055-8
Online ISBN: 978-3-319-31056-5
eBook Packages: Engineering, Engineering (R0)