DOI: 10.1145/3573942.3573968
Research article

A Method with Universal Transformer for Multimodal Sentiment Analysis

Published: 16 May 2023

Abstract

Multimodal sentiment analysis refers to the use of computers to analyze and identify the emotions people express, based on extracted multimodal sentiment features; it plays a significant role in human-computer interaction and financial market prediction. Most existing approaches model contextual information, which effectively captures the contextual connections within each modality, but they often overlook the correlations between modalities, even though these cross-modal correlations are also critical to the final recognition result. This paper therefore proposes a multimodal sentiment analysis approach based on the universal transformer: a framework that uses the universal transformer to model the connections between modalities, while employing effective feature extraction methods to capture the contextual connections within each individual modality. We evaluated the proposed method on two benchmark multimodal sentiment analysis datasets, CMU-MOSI and CMU-MOSEI, and it outperformed other methods of the same type.
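To make the proposed architecture concrete, below is a minimal PyTorch sketch of a universal-transformer-style encoder over concatenated multimodal features. The defining trait, following Dehghani et al. (2018), is a single weight-shared transformer layer applied recurrently, with position and recurrence-step embeddings re-injected before each pass. All class, parameter, and variable names here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class UniversalTransformerEncoder(nn.Module):
    """Minimal sketch: one shared transformer layer applied recurrently."""

    def __init__(self, d_model=128, n_heads=4, n_steps=4, max_len=512):
        super().__init__()
        # Reusing one layer across all steps (rather than stacking distinct
        # layers) is what distinguishes the universal transformer.
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.pos_emb = nn.Embedding(max_len, d_model)   # position coordinate
        self.step_emb = nn.Embedding(n_steps, d_model)  # recurrence-step coordinate
        self.n_steps = n_steps

    def forward(self, x):                                # x: (batch, seq, d_model)
        pos = torch.arange(x.size(1), device=x.device)
        for t in range(self.n_steps):
            step = torch.full_like(pos, t)               # same step index everywhere
            # Re-inject position and step embeddings before each shared pass.
            h = x + self.pos_emb(pos) + self.step_emb(step)
            x = self.shared_layer(h)
        return x

# Toy usage: project each modality to a shared width, concatenate along the
# sequence axis, and let self-attention model cross-modal correlations.
text_feat = torch.randn(2, 20, 128)    # e.g., contextual text features
audio_feat = torch.randn(2, 20, 128)   # e.g., projected acoustic features
visual_feat = torch.randn(2, 20, 128)  # e.g., projected visual features
fused = torch.cat([text_feat, audio_feat, visual_feat], dim=1)
out = UniversalTransformerEncoder()(fused)               # -> (2, 60, 128)

In this sketch, the cross-modal correlations the paper emphasizes are left to self-attention over the concatenated sequence; the per-modality feature extractors that capture within-modality context (for text, audio, and video) would run before the concatenation step.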



    Published In

    AIPR '22: Proceedings of the 2022 5th International Conference on Artificial Intelligence and Pattern Recognition
    September 2022
    1221 pages
    ISBN:9781450396899
    DOI:10.1145/3573942

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Emotional Features
    2. Feature Extraction
    3. Multimodal sentiment analysis
    4. Universal transformer


    Conference

    AIPR 2022
