DOI: 10.1145/3423327
MuSe'20: Proceedings of the 1st International on Multimodal Sentiment Analysis in Real-life Media Challenge and Workshop
ACM 2020 Proceedings
Publisher: Association for Computing Machinery, New York, NY, United States
Conference:
MM '20: The 28th ACM International Conference on Multimedia, Seattle, WA, USA, 16 October 2020
ISBN:
978-1-4503-8157-4
Published:
15 October 2020

Abstract

It is our great pleasure to welcome you to the 1st Multimodal Sentiment Analysis Challenge and Workshop (MuSe 2020), held in conjunction with ACM Multimedia 2020. The MuSe challenge and its associated workshop continue to push the boundaries of integrated audio-visual and text-based sentiment analysis and emotion sensing. In this first edition, we posed three tasks on a large, natural set of user-generated data: predicting continuous-valued dimensional affect, predicting the novel dimension of trustworthiness, and detecting 10 domain-specific topic classes as targets of discrete emotion classification.

The mission of the MuSe Challenge and Workshop is to provide a common benchmark for multimodal information processing and to bring together the symbol-based Sentiment Analysis and the signal-based Affective Computing communities, comparing the merits of multimodal fusion for the three core modalities under well-defined conditions. A further motivation is the need to advance sentiment and emotion recognition systems so that they can handle unsegmented, previously unexplored naturalistic behaviour in large amounts of in-the-wild data, as this is exactly the type of data we face in real life. As you will see, these goals have been met through the selection of the data and the (challenge) contributions.

The call for participation and papers attracted registrations from 21 teams from Asia, Europe, and North America. The programme committee accepted 5 papers, including the baseline paper. For predicting the time-continuous emotional dimensions, the best models improved the Concordance Correlation Coefficient (CCC) over the baseline by 0.36 on sentiment/valence (0.2431 to 0.5996) and by 0.19 on arousal (0.2834 to 0.4726). We hope that these proceedings will serve as a valuable reference for researchers and developers in the area of multimodal sentiment analysis and audio-visual emotion recognition.
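The challenge results above are reported in terms of the Concordance Correlation Coefficient (CCC), which jointly measures correlation and agreement in mean and scale between a predicted and a gold-standard affect signal. A minimal sketch of the standard CCC formula in Python follows; this is an illustration of the metric itself, not the challenge's official evaluation script:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient between two 1-D sequences.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (biased) variance and covariance.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)
```

A perfect prediction yields a CCC of 1.0; unlike Pearson correlation, CCC also penalizes systematic offsets and scaling differences between the two signals.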

SESSION: Keynotes
keynote
Vehicle Interiors as Sensate Environments

The research field of biologically inspired and cognitive systems is currently gaining increasing interest. However, modern vehicles and their architectures are still dominated by traditional, engineered systems. This talk will give an industrial ...

keynote
Personalized Machine Learning for Human-centered Machine Intelligence

Recent developments in AI and Machine Learning (ML) are revolutionizing traditional technologies for health and education by enabling more intelligent therapeutic and learning tools that can automatically perceive and predict user's behavior (e.g. from ...

SESSION: Invited Talks
abstract
Multimodal Social Media Mining

Social media have transformed the Web into an interactive sharing platform where users upload data and media, comment on, and share this content within their social circles. The large-scale availability of user-generated content in social media ...

abstract
Extending Multimodal Emotion Recognition with Biological Signals: Presenting a Novel Dataset and Recent Findings

Multimodal fusion has shown great promise in recent literature, particularly for audio-dominant tasks. In this talk, we outline the findings from a recently developed multimodal dataset, and discuss the promise of fusing biological signals with speech ...

abstract
End2You: Multimodal Profiling by End-to-End Learning and Applications

Multimodal profiling is a fundamental component towards a complete interaction between human and machine. This is an important task for intelligent systems as they can automatically sense and adapt their responses according to the human behavior. The ...

SESSION: Paper Presentations
research-article
Unsupervised Representation Learning with Attention and Sequence to Sequence Autoencoders to Predict Sleepiness From Speech

Motivated by the attention mechanism of the human visual system and recent developments in the field of machine translation, we introduce our attention-based and recurrent sequence to sequence autoencoders for fully unsupervised representation learning ...

research-article
Multi-modal Fusion for Video Sentiment Analysis

Automatic sentiment analysis can support revealing a subject's emotional state and opinion tendency toward an entity. In this paper, we present our solutions for the MuSe-Wild sub-challenge of Multimodal Sentiment Analysis in Real-life Media (MuSe) ...

research-article
Multi-modal Continuous Dimensional Emotion Recognition Using Recurrent Neural Network and Self-Attention Mechanism

Automatic perception and understanding of human emotion or sentiment has a wide range of applications and has attracted increasing attention nowadays. The Multimodal Sentiment Analysis in Real-life Media (MuSe) 2020 provides a testing bed for ...

research-article
MuSe 2020 Challenge and Workshop: Multimodal Sentiment Analysis, Emotion-target Engagement and Trustworthiness Detection in Real-life Media: Emotional Car Reviews in-the-wild

Multimodal Sentiment Analysis in Real-life Media (MuSe) 2020 is a Challenge-based Workshop focusing on the tasks of sentiment recognition, as well as emotion-target engagement and trustworthiness detection by means of more comprehensively integrating ...

research-article
AAEC: An Adversarial Autoencoder-based Classifier for Audio Emotion Recognition

In recent years, automatic emotion recognition has attracted the attention of researchers because of its great effects and wide implementations in supporting humans' activities. Given that the data about emotions is difficult to collect and organize ...

Contributors
  • Imperial College London
  • Delft University of Technology
  • Nanyang Technological University
  • Information Technologies Institute
  • University of Augsburg

Acceptance Rates

Overall Acceptance Rate 14 of 17 submissions, 82%
Year      Submitted  Accepted  Rate
MuSe '22  17         14        82%
Overall   17         14        82%