
Affect-Predictive Models: Predicting Emotional Responses Directly to Stimuli

Published: 11 April 2022
DOI: 10.1145/3498851.3498962

ABSTRACT

Historically, research in the field of Affective Computing has focused on recognizing emotions expressed by humans. In this work, we show that it is possible to predict the central tendency of emotional reactions to a stimulus directly, before the stimulus has been shown to any human. We achieve this by training new Affect-Predictive machine learning models, which leverage a large volume of weak emotional signals: aggregated, fully anonymized reactions of online users to a wide variety of textual and visual stimuli. Based on our Affect-Predictive computer vision model, we (a) set a new benchmark for evaluating its predictive power on an open-access affective image set, (b) generate affective saliency maps, and (c) discuss a few instances of peculiar visual patterns learned by the model. We theorize that Affect-Predictive models can be used to learn implicit patterns that allow AI agents to see the world and react in a more human-like way: imagine an autonomous vehicle that slows down automatically when it detects something highly surprising or negative. Using our Affect-Predictive natural language model, we demonstrate that it is possible to predict the general emotional response to a piece of text from the reader's perspective, and we show how this can be used in practice to improve social listening. We conclude with a discussion of the broader implications of Affect-Predictive models: human emotional reactions can be treated as natural encoders of multimodal stimuli, capturing just enough semantics to allow instantaneous decision-making, and the ability to automatically predict such reactions directly from stimuli opens up many new opportunities in the field of Affective Computing.
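
To make the modeling idea concrete, below is a minimal sketch, under our own assumptions, of how an Affect-Predictive vision model of this kind could be set up: a standard CNN backbone with a small regression head fit to the central tendency (e.g., mean valence and arousal) of aggregated user reactions per image. The backbone, loss, affect dimensions, and data shapes are illustrative placeholders, not the exact configuration described in the paper.

```python
# Hedged sketch (not the authors' code): regress the mean of aggregated,
# anonymized emotional reactions directly from an image.
import torch
import torch.nn as nn
from torchvision import models

class AffectPredictiveNet(nn.Module):
    def __init__(self, n_affect_dims: int = 2):  # e.g., valence and arousal
        super().__init__()
        backbone = models.resnet50(weights=None)  # pretrained weights optional
        backbone.fc = nn.Identity()               # strip the ImageNet classifier
        self.backbone = backbone
        self.head = nn.Linear(2048, n_affect_dims)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

model = AffectPredictiveNet()
criterion = nn.MSELoss()  # fit per-image mean reaction scores
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: images plus per-image mean affect scores derived from
# aggregated user reactions (scores in [-1, 1] are an assumption).
images = torch.randn(8, 3, 224, 224)
targets = torch.rand(8, 2) * 2 - 1

optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```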

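The affective saliency maps mentioned in the abstract can be approximated with gradient-based attribution in the style of Grad-CAM. The hedged sketch below reuses the hypothetical model above; the hooked layer and the choice of affect index are illustrative assumptions rather than the paper's exact method.

```python
# Grad-CAM-style "affective saliency": which image regions drive one
# predicted affect score? Assumes the AffectPredictiveNet sketch above.
import torch
import torch.nn.functional as F

def affect_saliency(model, image, affect_index=0):
    """Return an (H, W) map of regions driving one predicted affect score."""
    feats, grads = {}, {}
    layer = model.backbone.layer4  # last conv block of the ResNet backbone
    h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    score = model(image.unsqueeze(0))[0, affect_index]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pool gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))      # weight feature maps
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[1:],
                        mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()

saliency = affect_saliency(model, images[0])  # normalized map in [0, 1]
```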
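
On the language side, one plausible realization (an assumption for illustration, not the paper's confirmed architecture) of an Affect-Predictive text model is a pooled-embedding classifier trained to match the distribution of aggregated reader reactions, e.g., with a KL-divergence loss against per-text reaction shares. The reaction vocabulary and all names below are hypothetical.

```python
# Hedged sketch: predict a distribution over aggregated reader reactions
# for a piece of text. Reaction labels are an illustrative assumption.
import torch
import torch.nn as nn

REACTIONS = ["like", "love", "haha", "wow", "sad", "angry"]

class TextAffectModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 300):
        super().__init__()
        # A pretrained table (e.g., fastText vectors) could seed this embedding.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        self.classifier = nn.Linear(embed_dim, len(REACTIONS))

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor):
        pooled = self.embedding(token_ids, offsets)     # mean of token vectors
        return self.classifier(pooled).softmax(dim=-1)  # reaction distribution

text_model = TextAffectModel(vocab_size=50_000)
tokens = torch.tensor([3, 17, 256, 9, 42])   # one tokenized sentence (dummy ids)
dist = text_model(tokens, torch.tensor([0])) # offsets mark document boundaries
print(dict(zip(REACTIONS, dist[0].tolist())))
```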

Published in

WI-IAT '21: IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology
December 2021, 541 pages
ISBN: 9781450391870
DOI: 10.1145/3498851
Copyright © 2021 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
