Abstract
Emotion analysis plays a crucial role in understanding video content. Existing studies often approach it as a closed-set classification task, overlooking the fact that human emotional experiences are too complex and nuanced to be adequately expressed by a limited number of categories. In this paper, we propose MM-VEMA, a novel MultiModal perspective for Video EMotion Analysis. We formulate the task as a cross-modal matching problem within a joint multimodal space of videos and emotional experiences (e.g., emotional words, phrases, and sentences). By finding the experiences that most closely match each video in this space, we can derive the emotions evoked by the video in a more comprehensive manner. To construct this joint multimodal space, we introduce an efficient yet effective method that manipulates the multimodal space of a pre-trained vision-language model using a small set of emotional prompts. We conduct experiments and analyses to demonstrate the effectiveness of our method. The results show that videos and emotional experiences are well aligned in the joint multimodal space, and our model achieves state-of-the-art performance on three public datasets.
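To make the matching step concrete, the sketch below ranks free-form emotional experiences against a video in the shared embedding space of a pre-trained vision-language model. It is a minimal illustration, not the authors' MM-VEMA implementation: the CLIP checkpoint ("openai/clip-vit-base-patch32"), the example prompt wording, and the frame-averaging video encoder are assumptions made for this example, and the paper's prompt-based manipulation of the multimodal space is omitted.

# Minimal sketch of cross-modal matching between a video and emotional
# experiences in a shared vision-language space. Illustrative only; the
# checkpoint, prompts, and frame-averaging encoder are assumptions, not
# the MM-VEMA method described in the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Emotional "experiences" expressed as open-ended text: words, phrases, sentences.
experiences = [
    "a video that makes people feel joy",
    "a video that makes people feel fear",
    "a heart-warming reunion that moves viewers to tears",
    "an unsettling scene that evokes anxiety",
]

def encode_video(frames: list[Image.Image]) -> torch.Tensor:
    """Encode sampled frames and mean-pool them into one video embedding."""
    inputs = processor(images=frames, return_tensors="pt")
    with torch.no_grad():
        frame_emb = model.get_image_features(**inputs)   # (T, D)
    video_emb = frame_emb.mean(dim=0, keepdim=True)      # (1, D)
    return video_emb / video_emb.norm(dim=-1, keepdim=True)

def encode_experiences(texts: list[str]) -> torch.Tensor:
    """Encode emotional experiences into the same embedding space."""
    inputs = processor(text=texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(**inputs)     # (N, D)
    return text_emb / text_emb.norm(dim=-1, keepdim=True)

def match(frames: list[Image.Image]) -> list[tuple[str, float]]:
    """Rank emotional experiences by cosine similarity to the video."""
    sims = (encode_video(frames) @ encode_experiences(experiences).T).squeeze(0)
    order = sims.argsort(descending=True)
    return [(experiences[i], sims[i].item()) for i in order]

Because both modalities live in one space, the top-ranked experiences serve directly as an open-ended description of the emotions the video evokes, rather than a single label from a fixed set.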
Acknowledgments
This work was supported by the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China (21XNLG28), the National Natural Science Foundation of China (No. 62276268), and Huawei Technology. We thank the anonymous reviewers for their helpful comments.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Pu, H., et al. (2024). Going Beyond Closed Sets: A Multimodal Perspective for Video Emotion Analysis. In: Liu, Q., et al. (eds.) Pattern Recognition and Computer Vision (PRCV 2023). Lecture Notes in Computer Science, vol. 14430. Springer, Singapore. https://doi.org/10.1007/978-981-99-8537-1_19