DOI: 10.1145/3581783.3610943

MuSe 2023 Challenge: Multimodal Prediction of Mimicked Emotions, Cross-Cultural Humour, and Personalised Recognition of Affects

Published: 27 October 2023

ABSTRACT

The 4th Multimodal Sentiment Analysis Challenge (MuSe) focuses on the Multimodal Prediction of Mimicked Emotions, Cross-Cultural Humour, and Personalised Recognition of Affects. The workshop is held in conjunction with ACM Multimedia '23. We provide three datasets as part of the challenge: (i) the Hume-Vidmimic dataset, which offers more than 30 hours of expressive behaviour data from 557 participants who mimic and rate three emotions: Approval, Disappointment, and Uncertainty. This multimodal resource is valuable for studying human emotional expression. (ii) The 2023 edition of the Passau Spontaneous Football Coach Humor (Passau-SFCH) dataset, whose training set comprises recordings of German football press conferences, while the unseen test set contains videos of English football press conferences. This unique configuration offers a cross-cultural evaluation setting for humour recognition. (iii) The Ulm-Trier Social Stress Test (Ulm-TSST) dataset, which contains recordings of subjects under stress, annotated with arousal and valence signals; some test labels are provided to enable personalisation. Based on these datasets, we formulate three multimodal affective computing sub-challenges: (1) the Mimicked Emotions Sub-Challenge (MuSe-Mimic) for categorical emotion prediction, (2) the Cross-Cultural Humour Detection Sub-Challenge (MuSe-Humour), and (3) the Personalisation Sub-Challenge (MuSe-Personalisation) for personalised dimensional emotion recognition. In this summary, we outline the challenge's motivation, participation guidelines, conditions, and results.

