DOI: 10.1145/3503161.3551792
MuSe 2022 Challenge: Multimodal Humour, Emotional Reactions, and Stress

Published: 10 October 2022

Abstract

The 3rd Multimodal Sentiment Analysis Challenge (MuSe) focuses on multimodal affective computing. The workshop is held in conjunction with ACM Multimedia'22. Three datasets are provided as part of the challenge: (i) the Passau Spontaneous Football Coach Humor (Passau-SFCH) dataset, which contains humour-tagged audio-visual data of German football coaches; (ii) the Hume-Reaction dataset, which contains annotations of how people respond to emotional stimuli in terms of seven different emotional expression intensities; and (iii) the Ulm-Trier Social Stress Test (Ulm-TSST) dataset, which consists of audio-visual recordings of individuals in stressful circumstances, labelled with continuous emotion values. Based on these datasets, three affective computing challenges are defined: 1) the Humor Detection Sub-Challenge (MuSe-Humor), for spontaneous humour recognition; 2) the Emotional Reactions Sub-Challenge (MuSe-Reaction), for the prediction of seven fine-grained 'in-the-wild' emotions; and 3) the Emotional Stress Sub-Challenge (MuSe-Stress), for the continuous prediction of stressed emotion values. In this summary, we describe the motivation behind the challenge, the participation conditions, and the outcomes. The complete MuSe'22 workshop proceedings are available at: https://dl.acm.org/doi/proceedings/10.1145/3551876



Published In

MM '22: Proceedings of the 30th ACM International Conference on Multimedia
October 2022
7537 pages
ISBN:9781450392037
DOI:10.1145/3503161
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. affective computing
  2. challenge
  3. emotion recognition
  4. humor detection
  5. multimodal fusion
  6. multimodal sentiment analysis
  7. summary paper

Acceptance Rates

Overall Acceptance Rate: 2,145 of 8,556 submissions (25%)

Cited By

  • (2025) A twin disentanglement Transformer Network with Hierarchical-Level Feature Reconstruction for robust multimodal emotion recognition. Expert Systems with Applications, Vol. 264, Part C. https://doi.org/10.1016/j.eswa.2024.125822. Online publication date: 10 March 2025.
  • (2024) A multimodal shared network with a cross-modal distribution constraint for continuous emotion recognition. Engineering Applications of Artificial Intelligence, Vol. 133, Part D. https://doi.org/10.1016/j.engappai.2024.108413. Online publication date: 1 July 2024.
  • (2023) The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked Emotions, Cross-Cultural Humour, and Personalisation. In Proceedings of the 4th Multimodal Sentiment Analysis Challenge and Workshop, 1--10. https://doi.org/10.1145/3606039.3613114. Online publication date: 1 November 2023.
  • (2023) Multimodal Cross-Lingual Features and Weight Fusion for Cross-Cultural Humor Detection. In Proceedings of the 4th Multimodal Sentiment Analysis Challenge and Workshop, 51--57. https://doi.org/10.1145/3606039.3613110. Online publication date: 1 November 2023.
  • (2023) MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning. In Proceedings of the 31st ACM International Conference on Multimedia, 9610--9614. https://doi.org/10.1145/3581783.3612836. Online publication date: 26 October 2023.
  • (2023) Towards Learning Emotion Information from Short Segments of Speech. In ICASSP 2023 - IEEE International Conference on Acoustics, Speech and Signal Processing, 1--5. https://doi.org/10.1109/ICASSP49357.2023.10095892. Online publication date: 4 June 2023.
  • (2023) Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 5838--5844. https://doi.org/10.1109/CVPRW59228.2023.00620. Online publication date: June 2023.
  • (2023) Multi-modal Emotion Reaction Intensity Estimation with Temporal Augmentation. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 5777--5784. https://doi.org/10.1109/CVPRW59228.2023.00613. Online publication date: June 2023.
  • (2022) Integrating Cross-modal Interactions via Latent Representation Shift for Multi-modal Humor Detection. In Proceedings of the 3rd International Multimodal Sentiment Analysis Workshop and Challenge, 23--28. https://doi.org/10.1145/3551876.3554805. Online publication date: 10 October 2022.
  • (2022) A Personalised Approach to Audiovisual Humour Recognition and its Individual-level Fairness. In Proceedings of the 3rd International Multimodal Sentiment Analysis Workshop and Challenge, 29--36. https://doi.org/10.1145/3551876.3554800. Online publication date: 10 October 2022.
