Abstract
Multi-sensory data, with its complex inter-modality relationships and temporal interactions, carries richer and more nuanced emotional cues for sentiment analysis. Yet effectively integrating the modalities remains a major challenge in the Multimodal Sentiment Analysis (MSA) task. We present a generalized model, the Synesthesia Transformer with Contrastive learning (STC), whose synesthesia attention module lets other modalities guide the training of the input modality. This yields a more natural and effective fusion and achieves competitive results on two widely used benchmarks, CMU-MOSEI and CMU-MOSI.
This work is supported by the National Natural Science Foundation of China (No. 61832001) and Sichuan Science and Technology Program (No. 2021JDRC0073).
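The paper page itself includes no implementation, but the core idea of the synesthesia attention module — one modality's features guiding another's, with queries drawn from the modality being trained and keys/values from a guiding modality — can be sketched in a few lines of PyTorch. Everything below (the module name, dimensions, and the residual-plus-norm wiring) is an illustrative assumption in the spirit of cross-modal Transformers, not the authors' code.

```python
import torch
import torch.nn as nn

class SynesthesiaAttention(nn.Module):
    """Hypothetical sketch: refine a target modality's features under the
    guidance of another modality (queries from the target, keys/values
    from the guide), Transformer-style."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # target: (batch, seq_t, dim), e.g. text features being trained
        # guide:  (batch, seq_g, dim), e.g. audio or visual features
        attended, _ = self.attn(query=target, key=guide, value=guide)
        # Residual connection plus layer norm, as in a standard Transformer block.
        return self.norm(target + attended)

# Toy usage: audio features guiding text features (shapes are arbitrary).
text = torch.randn(2, 20, 64)   # (batch, text steps, dim)
audio = torch.randn(2, 50, 64)  # (batch, audio steps, dim)
fused = SynesthesiaAttention(dim=64)(text, audio)
print(fused.shape)  # torch.Size([2, 20, 64])
```

In such a design, the guided modality keeps its own sequence length while absorbing information from the guide; stacking one such block per modality pair is one plausible way to realize the mutual guidance the abstract describes.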
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Sun, Z., Chen, F., Shao, J. (2023). Synesthesia Transformer with Contrastive Multimodal Learning. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_36
DOI: https://doi.org/10.1007/978-3-031-30105-6_36
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30104-9
Online ISBN: 978-3-031-30105-6
eBook Packages: Computer Science, Computer Science (R0)