Alone versus In-a-group: A Comparative Analysis of Facial Affect Recognition

ABSTRACT
Automatic affect analysis and understanding has become a well-established research area over the last two decades, and recent work has started moving from individual to group scenarios. However, little attention has been paid to comparing the affect expressed in individual and group settings. This paper presents a framework to investigate the differences between affect recognition models along the arousal and valence dimensions in individual and group settings. We analyse how a model trained on data collected in an individual setting performs on test data collected in a group setting, and vice versa. A third model combining data from both individual and group settings is also investigated. A set of experiments is conducted to predict affective states along both the arousal and valence dimensions on two newly collected databases that contain sixteen participants watching affective movie stimuli in individual and group settings, respectively. The experimental results show that (1) the affect model trained with group data performs better on individual test data than the model trained with individual data performs on group test data, indicating that facial behaviours expressed in a group setting capture more variation than those expressed in an individual setting; and (2) the combined model does not outperform the affect models trained with a specific type of data (i.e., individual or group), but offers a good compromise. These results indicate that when multiple affect models trained with different types of data are not available, using the affect model trained with group data is a viable solution.
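The evaluation protocol outlined above lends itself to a compact sketch. The snippet below is a minimal, hypothetical illustration of the three training regimes (individual-only, group-only, and combined), not the paper's actual pipeline: the feature matrices, labels, and the generic SVM regressor are placeholder assumptions standing in for the paper's facial descriptors and learner.

```python
# Hypothetical sketch of the cross-setting evaluation protocol; the features,
# labels, and SVR learner below are assumptions, not the paper's code.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error


def evaluate(X_train, y_train, X_test, y_test):
    """Fit a regressor on one setting's data and score it on another's."""
    model = SVR(kernel="rbf", C=1.0)
    model.fit(X_train, y_train)
    return mean_squared_error(y_test, model.predict(X_test))


# Placeholder features/labels: D-dimensional facial descriptors with
# continuous arousal (or valence) annotations in [-1, 1].
rng = np.random.default_rng(0)
D = 64
X_ind, y_ind = rng.normal(size=(200, D)), rng.uniform(-1, 1, 200)
X_grp, y_grp = rng.normal(size=(200, D)), rng.uniform(-1, 1, 200)

# Simple random splits for illustration; the actual experiments would use
# subject-independent (e.g., leave-one-subject-out) splits.
Xi_tr, Xi_te, yi_tr, yi_te = train_test_split(X_ind, y_ind, random_state=0)
Xg_tr, Xg_te, yg_tr, yg_te = train_test_split(X_grp, y_grp, random_state=0)

# (1) Cross-setting models: individual -> group and group -> individual.
mse_i2g = evaluate(Xi_tr, yi_tr, Xg_te, yg_te)
mse_g2i = evaluate(Xg_tr, yg_tr, Xi_te, yi_te)

# (2) Combined model: pool training data from both settings, then test on
# each setting's held-out data separately.
X_tr = np.vstack([Xi_tr, Xg_tr])
y_tr = np.concatenate([yi_tr, yg_tr])
mse_comb_ind = evaluate(X_tr, y_tr, Xi_te, yi_te)
mse_comb_grp = evaluate(X_tr, y_tr, Xg_te, yg_te)

print(f"ind->grp: {mse_i2g:.3f}  grp->ind: {mse_g2i:.3f}")
print(f"combined->ind: {mse_comb_ind:.3f}  combined->grp: {mse_comb_grp:.3f}")
```

The same procedure would be run once per affect dimension (arousal and valence), with the sign of the performance gap between the cross-setting models indicating which training setting generalises better.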