
Going Beyond Closed Sets: A Multimodal Perspective for Video Emotion Analysis

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14430)


Abstract

Emotion analysis plays a crucial role in understanding video content. Existing studies often approach it as a closed-set classification task, overlooking the important fact that human emotional experiences are too complex to be adequately expressed by a limited number of categories. In this paper, we propose MM-VEMA, a novel MultiModal perspective for Video EMotion Analysis. We formulate the task as a cross-modal matching problem within a joint multimodal space of videos and emotional experiences (e.g., emotional words, phrases, and sentences). By finding the experiences that most closely match each video in this space, we can derive the emotions evoked by the video in a more comprehensive manner. To construct this joint multimodal space, we introduce an efficient yet effective method that manipulates the multimodal space of a pre-trained vision-language model using a small set of emotional prompts. We conduct experiments and analyses to demonstrate the effectiveness of our method. The results show that videos and emotional experiences are well aligned in the joint multimodal space, and our model achieves state-of-the-art performance on three public datasets.
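
To make the cross-modal matching idea concrete, the sketch below shows one way a joint video-text space could be queried for emotion analysis, assuming a CLIP-style pre-trained vision-language model: sampled video frames and candidate emotional-experience texts are embedded into the shared space, and the experiences are ranked by cosine similarity to the mean-pooled video embedding. This is a minimal illustration under stated assumptions, not the paper's implementation; the checkpoint, the EMOTIONAL_EXPERIENCES prompt list, the naive temporal pooling, and the rank_emotional_experiences helper are all placeholders rather than MM-VEMA's actual prompts or training procedure.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical emotional-experience texts; MM-VEMA's real prompt set is not shown here.
EMOTIONAL_EXPERIENCES = [
    "a video that makes people feel joy",
    "a video that makes people feel fear",
    "a heart-warming reunion between old friends",
    "a tense, unsettling confrontation",
]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def rank_emotional_experiences(frames):
    """Rank candidate emotional experiences by cosine similarity to a video,
    where the video is represented by mean-pooled CLIP frame embeddings."""
    with torch.no_grad():
        # Encode sampled video frames (a list of PIL images) and mean-pool over time.
        image_inputs = processor(images=frames, return_tensors="pt")
        frame_feats = model.get_image_features(**image_inputs)   # (T, D)
        video_feat = frame_feats.mean(dim=0, keepdim=True)       # (1, D)

        # Encode the emotional-experience texts into the same space.
        text_inputs = processor(text=EMOTIONAL_EXPERIENCES, padding=True, return_tensors="pt")
        text_feats = model.get_text_features(**text_inputs)      # (N, D)

        # Cosine similarity between the video embedding and each text embedding.
        video_feat = video_feat / video_feat.norm(dim=-1, keepdim=True)
        text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
        sims = (video_feat @ text_feats.T).squeeze(0)

    return sorted(zip(EMOTIONAL_EXPERIENCES, sims.tolist()), key=lambda p: -p[1])


# Example: describe a clip by its top-matching emotional experiences.
# frames = [Image.open(p) for p in ["frame_000.jpg", "frame_008.jpg", "frame_016.jpg"]]
# for text, score in rank_emotional_experiences(frames)[:3]:
#     print(f"{score:.3f}  {text}")

Note that a frozen, off-the-shelf encoder as used here is only a starting point: per the abstract, MM-VEMA additionally adapts the joint multimodal space with a small set of emotional prompts before matching.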



Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities and the Research Funds of Renmin University of China (21XNLG28), the National Natural Science Foundation of China (No. 62276268), and Huawei Technology. We thank the anonymous reviewers for their helpful comments.

Author information


Corresponding author

Correspondence to Ruihua Song.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1074 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Pu, H. et al. (2024). Going Beyond Closed Sets: A Multimodal Perspective for Video Emotion Analysis. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14430. Springer, Singapore. https://doi.org/10.1007/978-981-99-8537-1_19


  • DOI: https://doi.org/10.1007/978-981-99-8537-1_19

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8536-4

  • Online ISBN: 978-981-99-8537-1

  • eBook Packages: Computer Science, Computer Science (R0)
