DOI: 10.1145/2964284.2967276
Short paper

Alone versus In-a-group: A Comparative Analysis of Facial Affect Recognition

Published: 01 October 2016

ABSTRACT

Automatic affect analysis and understanding has become a well-established research area in the last two decades. Recent works have started moving from individual to group scenarios. However, little attention has been paid to comparing the affect expressed in individual and group settings. This paper presents a framework to investigate the differences in affect recognition models along arousal and valence dimensions in individual and group settings. We analyse how a model trained on data collected from an individual setting performs on test data collected from a group setting, and vice versa. A third model combining data from both individual and group settings is also investigated. A set of experiments is conducted to predict the affective states along both arousal and valence dimensions on two newly collected databases that contain sixteen participants watching affective movie stimuli in individual and group settings, respectively. The experimental results show that (1) the affect model trained with group data performs better on individual test data than the model trained with individual data tested on group data, indicating that facial behaviours expressed in a group setting capture more variation than in an individual setting; and (2) the combined model does not show better performance than the affect model trained with a specific type of data (i.e., individual or group), but proves to be a good compromise. These results indicate that in settings where multiple affect models trained with different types of data are not available, using the affect model trained with group data is a viable solution.
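
To make the evaluation protocol concrete, the sketch below illustrates the cross-setting train/test procedure the abstract describes, using scikit-learn's SVR on randomly generated placeholder data; the array names, regressor choice, and error metric are illustrative assumptions rather than the authors' actual pipeline.

    # Minimal sketch of the cross-setting evaluation protocol (assumption:
    # an SVR regressor stands in for the paper's affect models, and random
    # arrays stand in for facial features and arousal/valence labels).
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)

    # Placeholder feature matrices and continuous labels for the two settings.
    X_indiv, y_indiv = rng.normal(size=(200, 64)), rng.uniform(-1, 1, 200)
    X_group, y_group = rng.normal(size=(200, 64)), rng.uniform(-1, 1, 200)

    def cross_setting_rmse(X_train, y_train, X_test, y_test):
        """Train on one setting, report RMSE on the other."""
        model = SVR(kernel="rbf").fit(X_train, y_train)
        return mean_squared_error(y_test, model.predict(X_test)) ** 0.5

    # Individual -> group, group -> individual, and a combined model.
    print("indiv -> group:", cross_setting_rmse(X_indiv, y_indiv, X_group, y_group))
    print("group -> indiv:", cross_setting_rmse(X_group, y_group, X_indiv, y_indiv))
    X_comb = np.vstack([X_indiv, X_group])
    y_comb = np.concatenate([y_indiv, y_group])
    print("combined -> indiv:", cross_setting_rmse(X_comb, y_comb, X_indiv, y_indiv))
    print("combined -> group:", cross_setting_rmse(X_comb, y_comb, X_group, y_group))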


Published in

MM '16: Proceedings of the 24th ACM International Conference on Multimedia
October 2016, 1542 pages
ISBN: 9781450336031
DOI: 10.1145/2964284
Copyright © 2016 ACM

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

MM '16 paper acceptance rate: 52 of 237 submissions (22%). Overall acceptance rate: 995 of 4,171 submissions (24%).
