
EAID: An Eye-Tracking Based Advertising Image Dataset with Personalized Affective Tags

  • Conference paper
  • Published in: Advances in Computer Graphics (CGI 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14495)


Abstract

Unlike natural images with arbitrary content, advertisements contain abundant emotion-eliciting, deliberately crafted scenes and multi-modal visual elements with closely related semantics. However, little research has examined the interrelationship between advertising vision and affective perception. The absence of advertising datasets with affective labels and visual attention benchmarks is one of the most pressing issues to be addressed. Meanwhile, growing evidence indicates that eye movements can reveal the internal states of the human mind. Motivated by these observations, we use a high-precision eye tracker to record the eye-movement data of 57 subjects as they observe 1000 advertising images. Seven-point opinion ratings for five advertising attributes (i.e., ad liking, emotional, aesthetic, functional, and brand liking) are then collected. We further present a preliminary analysis of the correlations among advertising attributes, subjects' visual attention, eye-movement characteristics, and personality traits, obtaining a series of enlightening conclusions. To the best of our knowledge, the proposed dataset is the largest eye-tracking-based advertising image dataset with multiple personalized affective tags, providing a new exploration space and data foundation for the multimedia visual analysis and affective computing communities. The data are available at: https://github.com/lscumt/EAID.
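The abstract fixes the dataset's shape: 1000 advertising images, 57 subjects, per-subject eye movements, and 7-point ratings on five attributes. The repository's actual file layout is not described here, so the following Python sketch is purely illustrative; the CSV schema, column names, and loader function are assumptions made for this example, not the authors' released format.

```python
# Hypothetical loader for EAID-style records. The repository's real file
# layout is not described in the abstract, so every path and column name
# below is an assumption made for illustration only.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import csv

# The five advertising attributes rated on a 7-point scale (from the abstract).
ATTRIBUTES = ["ad_liking", "emotional", "aesthetic", "functional", "brand_liking"]

@dataclass
class Fixation:
    x: float            # horizontal gaze position, image pixels (assumed unit)
    y: float            # vertical gaze position, image pixels (assumed unit)
    duration_ms: float  # fixation duration in milliseconds (assumed unit)

@dataclass
class Trial:
    image_id: str       # one of the 1000 advertising images
    subject_id: str     # one of the 57 subjects
    fixations: List[Fixation] = field(default_factory=list)
    ratings: Dict[str, int] = field(default_factory=dict)  # 7-point scores

def load_trials(path: str) -> List[Trial]:
    """Parse a hypothetical flat CSV with one fixation per row, where the
    rating columns are repeated on every row of the same (image, subject)."""
    trials: Dict[Tuple[str, str], Trial] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["image_id"], row["subject_id"])
            if key not in trials:
                trials[key] = Trial(
                    image_id=row["image_id"],
                    subject_id=row["subject_id"],
                    ratings={a: int(row[a]) for a in ATTRIBUTES},
                )
            trials[key].fixations.append(
                Fixation(float(row["x"]), float(row["y"]), float(row["dur_ms"]))
            )
    return list(trials.values())
```

A correlation analysis like the one the abstract describes would then aggregate per-trial eye-movement statistics (e.g., fixation count or total fixation duration) and relate them to each attribute's rating across subjects.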



Acknowledgements

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information


Corresponding author

Correspondence to Jiansheng Qian.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liang, S., Liu, R., Qian, J. (2024). EAID: An Eye-Tracking Based Advertising Image Dataset with Personalized Affective Tags. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14495. Springer, Cham. https://doi.org/10.1007/978-3-031-50069-5_24


  • DOI: https://doi.org/10.1007/978-3-031-50069-5_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50068-8

  • Online ISBN: 978-3-031-50069-5

  • eBook Packages: Computer Science, Computer Science (R0)
