It is our great pleasure to welcome you to the 1st Multimodal Sentiment Analysis Challenge and Workshop (MuSe 2020), held in conjunction with ACM Multimedia 2020. The MuSe challenge and its associated workshop continue to push the boundaries of integrated audio-visual and text-based sentiment analysis and emotion sensing. In this first edition, we posed the problems of predicting continuous-valued dimensional affect, predicting the novel dimension of trustworthiness, and detecting 10 classes of domain-specific topics as targets of discrete emotions, on a large and natural set of user-generated data.
The mission of the MuSe Challenge and Workshop is to provide a common benchmark for multimodal information processing and to bring together the symbol-based Sentiment Analysis and the signal-based Affective Computing communities to compare the merits of multimodal fusion across the three core modalities under well-defined conditions. A further motivation is the need to advance sentiment and emotion recognition systems so that they can handle unsegmented, previously unexplored naturalistic behaviour in large amounts of in-the-wild data, as this is exactly the type of data we face in real life. As you will see, these goals have been met through the selection of the data and the (challenge) contributions.
The call for participation and papers attracted registrations from 21 teams across Asia, Europe, and North America. The programme committee accepted 5 papers, including the baseline paper. For predicting the time-continuous emotional dimensions, the best models improved the Concordance Correlation Coefficient (CCC) over the baseline by 0.36 on sentiment/valence (0.2431 to 0.5996) and by 0.19 on arousal (0.2834 to 0.4726). We hope that these proceedings will serve as a valuable reference for researchers and developers in the areas of multimodal sentiment analysis and audio-visual emotion recognition.
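The Concordance Correlation Coefficient used to score the continuous emotion sub-challenges can be computed as shown in the sketch below. This is a generic implementation of Lin's CCC using population statistics, not the official MuSe evaluation script; the function name `ccc` is our own.

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's Concordance Correlation Coefficient between two sequences.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)

    Ranges from -1 to 1; 1 means perfect agreement in both
    correlation and scale/location (unlike Pearson's r, CCC
    penalises systematic offsets and scaling between the series).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()          # population variance
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)
```

For identical sequences the function returns 1.0, and a constant offset or rescaling of the prediction lowers the score even when the Pearson correlation stays at 1.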
Proceeding Downloads
Vehicle Interiors as Sensate Environments
The research field of biologically inspired and cognitive systems is currently gaining increasing interest. However, modern vehicles and their architectures are still dominated by traditional, engineered systems. This talk will give an industrial ...
Personalized Machine Learning for Human-centered Machine Intelligence
Recent developments in AI and Machine Learning (ML) are revolutionizing traditional technologies for health and education by enabling more intelligent therapeutic and learning tools that can automatically perceive and predict users' behavior (e.g. from ...
Multimodal Social Media Mining
Social media have transformed the Web into an interactive sharing platform where users upload data and media, comment on, and share this content within their social circles. The large-scale availability of user-generated content in social media ...
Extending Multimodal Emotion Recognition with Biological Signals: Presenting a Novel Dataset and Recent Findings
Multimodal fusion has shown great promise in recent literature, particularly for audio-dominant tasks. In this talk, we outline the findings from a recently developed multimodal dataset, and discuss the promise of fusing biological signals with speech ...
End2You: Multimodal Profiling by End-to-End Learning and Applications
Multimodal profiling is a fundamental component of complete interaction between human and machine. This is an important task for intelligent systems, as they can automatically sense and adapt their responses according to human behavior. The ...
Unsupervised Representation Learning with Attention and Sequence to Sequence Autoencoders to Predict Sleepiness From Speech
Motivated by the attention mechanism of the human visual system and recent developments in the field of machine translation, we introduce our attention-based and recurrent sequence to sequence autoencoders for fully unsupervised representation learning ...
Multi-modal Fusion for Video Sentiment Analysis
Automatic sentiment analysis can support revealing a subject's emotional state and opinion tendency toward an entity. In this paper, we present our solutions for the MuSe-Wild sub-challenge of Multimodal Sentiment Analysis in Real-life Media (MuSe) ...
Multi-modal Continuous Dimensional Emotion Recognition Using Recurrent Neural Network and Self-Attention Mechanism
Automatic perception and understanding of human emotion or sentiment has a wide range of applications and has attracted increasing attention nowadays. The Multimodal Sentiment Analysis in Real-life Media (MuSe) 2020 provides a testing bed for ...
MuSe 2020 Challenge and Workshop: Multimodal Sentiment Analysis, Emotion-target Engagement and Trustworthiness Detection in Real-life Media: Emotional Car Reviews in-the-wild
- Lukas Stappen,
- Alice Baird,
- Georgios Rizos,
- Panagiotis Tzirakis,
- Xinchen Du,
- Felix Hafner,
- Lea Schumann,
- Adria Mallol-Ragolta,
- Bjoern W. Schuller,
- Iulia Lefter,
- Erik Cambria,
- Ioannis Kompatsiaris
Multimodal Sentiment Analysis in Real-life Media (MuSe) 2020 is a Challenge-based Workshop focusing on the tasks of sentiment recognition, as well as emotion-target engagement and trustworthiness detection by means of more comprehensively integrating ...
AAEC: An Adversarial Autoencoder-based Classifier for Audio Emotion Recognition
In recent years, automatic emotion recognition has attracted the attention of researchers because of its great effects and wide implementations in supporting humans' activities. Given that the data about emotions is difficult to collect and organize ...
Acceptance Rates

| Year | Submitted | Accepted | Rate |
|---|---|---|---|
| MuSe '22 | 17 | 14 | 82% |
| Overall | 17 | 14 | 82% |