DOI: 10.1145/3394171.3421901

Summary of MuSe 2020: Multimodal Sentiment Analysis, Emotion-target Engagement and Trustworthiness Detection in Real-life Media

Published: 12 October 2020

Abstract

The first Multimodal Sentiment Analysis in Real-life Media (MuSe) 2020 was a challenge-based workshop held in conjunction with ACM Multimedia '20. It addressed three distinct 'in-the-wild' sub-challenges: sentiment/emotion recognition (MuSe-Wild), emotion-target engagement (MuSe-Target), and trustworthiness detection (MuSe-Trust). A large multimedia dataset, MuSe-CaR, was used; it was specifically designed to improve machine understanding of how sentiment (e.g. emotion) is linked to a topic in emotional, user-generated reviews. In this summary, we describe the motivation, the first-of-its-kind 'in-the-wild' database, the challenge conditions, and the participation, and give an overview of the utilised state-of-the-art techniques.

Supplementary Material

MP4 File (3394171.3421901.mp4)




Published In

MM '20: Proceedings of the 28th ACM International Conference on Multimedia
October 2020, 4889 pages
ISBN: 9781450379885
DOI: 10.1145/3394171
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. affective computing
  2. multimodal fusion
  3. multimodal sentiment analysis
  4. user-generated data

Qualifiers

  • Abstract

Conference

MM '20

Acceptance Rates

Overall acceptance rate: 2,145 of 8,556 submissions (25%)


Article Metrics

  • Downloads (last 12 months): 11
  • Downloads (last 6 weeks): 3
Reflects downloads up to 05 Mar 2025.


Cited By

  • (2023) The Multimodal Sentiment Analysis in Car Reviews (MuSe-CaR) Dataset: Collection, Insights and Improvements. IEEE Transactions on Affective Computing 14(2), 1334-1350. DOI: 10.1109/TAFFC.2021.3097002. Online publication date: 1 Apr 2023.
  • (2022) A Review of AI Cloud and Edge Sensors, Methods, and Applications for the Recognition of Emotional, Affective and Physiological States. Sensors 22(20), 7824. DOI: 10.3390/s22207824. Online publication date: 14 Oct 2022.
  • (2022) An Estimation of Online Video User Engagement From Features of Time- and Value-Continuous, Dimensional Emotions. Frontiers in Computer Science 4. DOI: 10.3389/fcomp.2022.773154. Online publication date: 23 Mar 2022.
  • (2022) A survey on sentiment analysis methods, applications, and challenges. Artificial Intelligence Review 55(7), 5731-5780. DOI: 10.1007/s10462-022-10144-1. Online publication date: 7 Feb 2022.
  • (2021) Multimodal Fusion Strategies for Physiological-emotion Analysis. Proceedings of the 2nd on Multimodal Sentiment Analysis Challenge, 43-50. DOI: 10.1145/3475957.3484452. Online publication date: 24 Oct 2021.
  • (2021) MuSe-Toolbox: The Multimodal Sentiment Analysis Continuous Annotation Fusion and Discrete Class Transformation Toolbox. Proceedings of the 2nd on Multimodal Sentiment Analysis Challenge, 75-82. DOI: 10.1145/3475957.3484451. Online publication date: 24 Oct 2021.
  • (2021) The MuSe 2021 Multimodal Sentiment Analysis Challenge. Proceedings of the 2nd on Multimodal Sentiment Analysis Challenge, 5-14. DOI: 10.1145/3475957.3484450. Online publication date: 24 Oct 2021.
  • (2021) MuSe 2021 Challenge. Proceedings of the 29th ACM International Conference on Multimedia, 5706-5707. DOI: 10.1145/3474085.3478582. Online publication date: 17 Oct 2021.
  • (2021) Temporal Graph Convolutional Network for Multimodal Sentiment Analysis. Proceedings of the 2021 International Conference on Multimodal Interaction, 239-247. DOI: 10.1145/3462244.3479939. Online publication date: 18 Oct 2021.
  • (2021) Sentiment Analysis and Topic Recognition in Video Transcriptions. IEEE Intelligent Systems 36(2), 88-95. DOI: 10.1109/MIS.2021.3062200. Online publication date: 1 Mar 2021.
