
Identifying emotions in images from valence and arousal ratings

Published in Multimedia Tools and Applications

Abstract

Experimental studies of emotion usually use datasets of normative emotional pictures to elicit specific emotional responses in human subjects. However, most of these datasets are not annotated with discriminating and reliable emotional tags, offering only valence and arousal ratings for each image. Complementing this information with emotional tags would enrich the datasets, increasing the number of annotated images available and consequently reducing the reuse of the same images across consecutive studies. This paper describes a multi-label recognizer that combines a fuzzy approach with a Random Forest classifier to recognize both the polarity and the discrete emotions elicited by an image, using its valence and arousal ratings. Polarity indicates whether the emotional content of the image is negative, neutral, or positive, whereas emotions provide a more detailed description of the emotional content conveyed by the image. We evaluated our multi-label recognizer using pictures from four existing datasets containing images annotated with emotional content and valence and arousal ratings. Experimental results show that our recognizer identifies polarity with a precision of 84.8%, single emotions with 80.7%, and two emotions with 81.1%. Our recognizer can be useful to researchers who want to identify polarity and/or emotions from stimuli annotated with valence and arousal ratings. In particular, it can be used to automatically annotate existing image datasets with emotional tags, avoiding the cost of manually annotating them with human subjects.
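As a rough illustration of the fuzzy polarity step described above, the sketch below maps a valence rating to fuzzy memberships in the negative, neutral, and positive classes and returns every class above a threshold (allowing multi-label output). The triangular membership shapes, the 1-9 SAM valence scale, and all breakpoints are illustrative assumptions, not the parameters reported in the paper.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at b, falling to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def polarity_memberships(valence):
    """Fuzzy membership of a valence rating (assumed 1-9 SAM scale) in each polarity.

    Breakpoints are hypothetical, chosen only to make the example concrete.
    """
    return {
        "negative": triangular(valence, 0.0, 1.0, 5.0),
        "neutral":  triangular(valence, 3.0, 5.0, 7.0),
        "positive": triangular(valence, 5.0, 9.0, 10.0),
    }

def classify_polarity(valence, threshold=0.5):
    """Return every polarity whose membership reaches the threshold (multi-label)."""
    memberships = polarity_memberships(valence)
    return [label for label, mu in memberships.items() if mu >= threshold]
```

In the full recognizer described in the abstract, such fuzzy memberships would be combined with a Random Forest classifier over both valence and arousal; the snippet only shows the general flavor of mapping continuous ratings to overlapping categorical labels, e.g. a valence near a class boundary can yield two labels at a low threshold.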


Notes

  1. In our recognizer we used Weka 3.6.13 (http://www.cs.waikato.ac.nz/ml/weka/)

  2. http://www.di.fc.ul.pt/~mjf/research/emoRecognizer/datasets


Acknowledgements

This work was supported by national funds through Fundação para a Ciência e Tecnologia, under LASIGE Strategic Project - UID/CEC/00408/2013.

Author information

Correspondence to Soraia M. Alarcão.


About this article


Cite this article

Alarcão, S.M., Fonseca, M.J. Identifying emotions in images from valence and arousal ratings. Multimed Tools Appl 77, 17413–17435 (2018). https://doi.org/10.1007/s11042-017-5311-8
