DOI: 10.1145/3123266.3123352

Exploring Domain Knowledge for Affective Video Content Analyses

Published: 19 October 2017

Abstract

The well-established film grammar is often used to manipulate the visual and audio elements of a video in order to evoke emotional experiences in the audience. Such film grammar, referred to here as domain knowledge, is crucial for affective video content analysis, yet it has not been thoroughly explored. In this paper, we propose a novel method for analyzing video affective content by exploiting domain knowledge. Specifically, taking visual elements as an example, we first infer probabilistic dependencies between visual elements and emotions from the summarized film grammar. We then translate this domain knowledge into constraints and formulate affective video content analysis as a constrained optimization problem. Experiments on the LIRIS-ACCEDE and DEAP databases demonstrate that the proposed method successfully leverages well-established film grammar to achieve better emotion classification from video content.
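To make the formulation concrete, below is a minimal sketch of how emotion classification under a film-grammar constraint could be posed as a constrained optimization problem. Everything here is an illustrative assumption rather than the paper's actual model: the synthetic features, the single rule (brighter shots lean toward positive valence), and the soft-penalty relaxation of the constraint.

```python
# Minimal, hypothetical sketch: a logistic valence classifier whose loss is
# penalized when predictions violate a probabilistic dependency summarized
# from film grammar. The brightness rule and all names are illustrative
# assumptions, not the paper's formulation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # per-clip visual features; column 0 = brightness
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)  # valence labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, lam=5.0):
    p = sigmoid(X @ w)
    # Standard cross-entropy loss on the labeled clips.
    ce = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # Film-grammar constraint, relaxed into a hinge penalty: the average
    # predicted probability of positive valence for bright shots should
    # not fall below that for dark shots.
    bright = X[:, 0] > 0
    violation = max(0.0, p[~bright].mean() - p[bright].mean())
    return ce + lam * violation

# Nelder-Mead avoids differentiating the non-smooth hinge term.
w_star = minimize(objective, x0=np.zeros(X.shape[1]), method="Nelder-Mead").x

p = sigmoid(X @ w_star)
print("P(positive | bright) =", round(p[X[:, 0] > 0].mean(), 3))
print("P(positive | dark)   =", round(p[X[:, 0] <= 0].mean(), 3))
```

The same pattern extends to multiple inferred dependencies (lighting, color, motion, audio) by adding one penalty term per rule, or to hard constraints by switching to a constrained solver instead of a penalty.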




    Published In

    MM '17: Proceedings of the 25th ACM international conference on Multimedia
    October 2017
    2028 pages
    ISBN:9781450349062
    DOI:10.1145/3123266
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. affective video content analyses
    2. domain knowledge
    3. visual elements

    Qualifiers

    • Research-article

    Funding Sources

    • the project from Anhui Science and Technology Agency
    • the National Science Foundation of China

    Conference

    MM '17: ACM Multimedia Conference
    October 23 - 27, 2017
    Mountain View, California, USA

    Acceptance Rates

    MM '17 Paper Acceptance Rate: 189 of 684 submissions (28%)
    Overall Acceptance Rate: 2,145 of 8,556 submissions (25%)


    Cited By

    • (2023) Stepwise Fusion Transformer for Affective Video Content Analysis. International Conference on Neural Computing for Advanced Applications, pp. 375-386. DOI: 10.1007/978-981-99-5847-4_27. Online publication date: 30-Aug-2023
    • (2022) Affective Video Content Analysis via Multimodal Deep Quality Embedding Network. IEEE Transactions on Affective Computing, 13(3), 1401-1415. DOI: 10.1109/TAFFC.2020.3004114. Online publication date: 1-Jul-2022
    • (2021) Joint Optimization in Edge-Cloud Continuum for Federated Unsupervised Person Re-identification. Proceedings of the 29th ACM International Conference on Multimedia, pp. 433-441. DOI: 10.1145/3474085.3475182. Online publication date: 17-Oct-2021
    • (2021) Multimodal Local-Global Attention Network for Affective Video Content Analysis. IEEE Transactions on Circuits and Systems for Video Technology, 31(5), 1901-1914. DOI: 10.1109/TCSVT.2020.3014889. Online publication date: May-2021
    • (2021) Video Affective Content Analysis by Exploring Domain Knowledge. IEEE Transactions on Affective Computing, 12(4), 1002-1017. DOI: 10.1109/TAFFC.2019.2912377. Online publication date: 1-Oct-2021
    • (2020) Performance Optimization of Federated Person Re-identification via Benchmark Analysis. Proceedings of the 28th ACM International Conference on Multimedia, pp. 955-963. DOI: 10.1145/3394171.3413814. Online publication date: 12-Oct-2020
    • (2020) Affective Video Content Analysis With Adaptive Fusion Recurrent Network. IEEE Transactions on Multimedia, 22(9), 2454-2466. DOI: 10.1109/TMM.2019.2955300. Online publication date: Sep-2020
    • (2019) Affective Video Content Analyses by Using Cross-Modal Embedding Learning Features. 2019 IEEE International Conference on Multimedia and Expo (ICME), pp. 844-849. DOI: 10.1109/ICME.2019.00150. Online publication date: Jul-2019
