Abstract
In this paper, we test the hypothesis that complex emotions share a common denominator that human experts can identify, specifically when determining target scenarios. The underlying concept behind our work is that the system composes complex emotions from simple (simplex) emotions. Our proposal does not completely remove the need for human operators, since they make the final decision confirming that the input video sequence corresponds to the simplex and complex emotions. However, the proposed system is designed so that, in theory, once it is fully installed and operational, it works on its own by automatically generating message alerts for the human operator. The entire process is based on the identification and labelling of what we call elementary or basic activities, namely events that are recognizable by artificial vision algorithms, in this case HER. To test our hypothesis, we carried out an experiment on human emotions in Alzheimer's interview videos. The results of the experiment show that it is possible to identify complex emotions in video, even in a context as important as Alzheimer's disease.
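To make the idea of composing complex emotions from simplex ones more concrete, the following minimal Python sketch illustrates one possible rule-based composition followed by an operator alert. It is not the authors' implementation: the emotion labels, the rule table, and the min_support threshold are illustrative assumptions.

# Illustrative sketch (assumed, not the authors' code): compose a complex
# emotion label from per-frame simplex-emotion detections and alert the
# human operator, who makes the final confirmation.
from collections import Counter

# Hypothetical composition rules: a complex emotion is inferred when all of
# its constituent simplex emotions co-occur within the analysed segment.
COMPLEX_RULES = {
    "nostalgia": {"happiness", "sadness"},
    "anxiety": {"fear", "surprise"},
}

def compose_complex(simplex_labels, min_support=2):
    """Return complex emotions whose simplex components each appear
    at least `min_support` times in the per-frame detections."""
    counts = Counter(simplex_labels)
    return [name for name, components in COMPLEX_RULES.items()
            if all(counts[c] >= min_support for c in components)]

# Mock per-frame simplex detections produced by a vision algorithm.
frames = ["happiness", "happiness", "sadness", "neutral", "sadness"]
for emotion in compose_complex(frames):
    # In the envisaged system this would be a message alert for the operator.
    print(f"ALERT: possible complex emotion '{emotion}' detected; please review segment.")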
Copyright information
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Gomez A, H.F., Malo M, E., Ruiz O, R.E., Martinez, C. (2020). Complex Human Emotions in Alzheimer’s Interviews: First Steps. In: Rocha, Á., Adeli, H., Reis, L., Costanzo, S., Orovic, I., Moreira, F. (eds) Trends and Innovations in Information Systems and Technologies. WorldCIST 2020. Advances in Intelligent Systems and Computing, vol 1159. Springer, Cham. https://doi.org/10.1007/978-3-030-45688-7_18
DOI: https://doi.org/10.1007/978-3-030-45688-7_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-45687-0
Online ISBN: 978-3-030-45688-7
eBook Packages: Intelligent Technologies and Robotics, Intelligent Technologies and Robotics (R0)