DOI: 10.1145/2964284.2964289
Research Article

Predicting Personalized Emotion Perceptions of Social Images

Published: 01 October 2016

Abstract

Images can convey rich semantics and evoke various emotions in viewers. Most existing work on affective image analysis has focused on predicting the dominant emotion, i.e., the one shared by the majority of viewers. In real-world applications, however, the dominant emotion is often insufficient, because the emotions an image induces are highly subjective and vary from viewer to viewer. In this paper, we propose to predict the personalized emotion perception of an image for each individual viewer. Different types of factors that may affect personalized image emotion perception, including visual content, social context, temporal evolution, and location influence, are jointly investigated. Rolling multi-task hypergraph learning is presented to combine these factors consistently, and a learning algorithm is designed for automatic optimization. For evaluation, we set up a large-scale image emotion dataset from Flickr, named Image-Emotion-Social-Net, covering both dimensional and categorical emotion representations with over 1 million images and about 8,000 users. Experiments on this dataset demonstrate that the proposed method achieves significant performance gains on personalized emotion classification compared to several state-of-the-art approaches.
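The abstract names rolling multi-task hypergraph learning without spelling out the mechanics. As background only, the sketch below illustrates classical transductive hypergraph learning (label propagation through a normalized hypergraph adjacency, in the style of Zhou et al.'s NIPS 2006 framework), which methods of this kind build on. It is a minimal illustration under assumed inputs, not the authors' algorithm: the function name, the incidence matrix H, the hyperedge weights w, and the update rule are assumptions for exposition, and the paper's method additionally couples per-user tasks and re-estimates ("rolls") the hypergraph over time.

    import numpy as np

    def hypergraph_label_propagation(H, w, Y, alpha=0.9, iters=100):
        # NOTE: illustrative sketch, not the paper's rolling multi-task method.
        # H: (n_vertices, n_hyperedges) incidence matrix, H[v, e] = 1 if vertex v
        #    belongs to hyperedge e; vertices could be user-image pairs, hyperedges
        #    groups sharing visual content, social context, time, or location.
        # w: (n_hyperedges,) hyperedge weights.
        # Y: (n_vertices, n_classes) initial labels: one-hot rows for labeled
        #    vertices, zero rows for unlabeled ones.
        dv = H @ w                          # vertex degrees
        de = H.sum(axis=0)                  # hyperedge degrees
        Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(dv, 1e-12)))
        De_inv = np.diag(1.0 / np.maximum(de, 1e-12))
        W = np.diag(w)
        # Normalized hypergraph adjacency: Theta = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
        Theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
        F = Y.astype(float).copy()
        for _ in range(iters):
            # Converges to the closed form F* = (1 - alpha)(I - alpha * Theta)^{-1} Y
            F = alpha * (Theta @ F) + (1 - alpha) * Y
        return F

    # Toy usage: 4 vertices, 2 hyperedges, one labeled vertex per class.
    H = np.array([[1, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
    Y = np.array([[1, 0], [0, 0], [0, 1], [0, 0]], dtype=float)
    print(hypergraph_label_propagation(H, np.ones(2), Y).argmax(axis=1))

In the personalized setting described above, one such structure would plausibly be built per factor and per user task, with the hypergraph re-estimated over time as new images arrive; the sketch shows only the shared propagation core.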




Published In

MM '16: Proceedings of the 24th ACM international conference on Multimedia
October 2016
1542 pages
ISBN: 978-1-4503-3603-1
DOI: 10.1145/2964284
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. hypergraph learning
  2. image emotion
  3. location influence
  4. personalized perception
  5. social context
  6. temporal evolution

Qualifiers

  • Research-article

Funding Sources

  • Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative and administered by the IDM Programme Office
  • National Natural Science Foundation of China
  • National Natural Science Foundation of China Key Program

Conference

MM '16: ACM Multimedia Conference
October 15-19, 2016
Amsterdam, The Netherlands

Acceptance Rates

MM '16 Paper Acceptance Rate: 52 of 237 submissions, 22%
Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%


Cited By

  • (2025) Affective Video Content Analysis: Decade Review and New Perspectives. Big Data Mining and Analytics, 8(1):118-144. DOI: 10.26599/BDMA.2024.9020048.
  • (2024) CausVSR. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, pages 3196-3204. DOI: 10.24963/ijcai.2024/354.
  • (2024) Label-Efficient Emotion and Sentiment Analysis. Proceedings of the 32nd ACM International Conference on Multimedia, pages 11300-11301. DOI: 10.1145/3664647.3689173.
  • (2024) Bridging Visual Affective Gap: Borrowing Textual Knowledge by Learning from Noisy Image-Text Pairs. Proceedings of the 32nd ACM International Conference on Multimedia, pages 602-611. DOI: 10.1145/3664647.3680875.
  • (2024) The Generative Fairy Tale of Scary Little Red Riding Hood. Proceedings of the 2024 ACM International Conference on Interactive Media Experiences, pages 129-144. DOI: 10.1145/3639701.3656303.
  • (2024) CGLF-Net: Image Emotion Recognition Network by Combining Global Self-Attention Features and Local Multiscale Features. IEEE Transactions on Multimedia, 26:1894-1908. DOI: 10.1109/TMM.2023.3289762.
  • (2024) One for All: A Unified Generative Framework for Image Emotion Classification. IEEE Transactions on Circuits and Systems for Video Technology, 34(8):7057-7068. DOI: 10.1109/TCSVT.2023.3341840.
  • (2024) Learning to compose diversified prompts for image emotion classification. Computational Visual Media, 10(6):1169-1183. DOI: 10.1007/s41095-023-0389-6.
  • (2024) Commonly Interesting Images. Computer Vision – ECCV 2024, pages 180-198. DOI: 10.1007/978-3-031-73036-8_11.
  • (2023) Detection of Emotions in Artworks Using a Convolutional Neural Network Trained on Non-Artistic Images: A Methodology to Reduce the Cross-Depiction Problem. Empirical Studies of the Arts, 42(1):38-64. DOI: 10.1177/02762374231163481.
