Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12194)

Abstract

Posters are widely used as a powerful communication tool. They are highly informative yet are typically viewed for only about three seconds, which calls for efficient and effective information delivery. It is therefore important to know where people look when viewing posters. Saliency models can be of great help when expensive and time-consuming eye-tracking experiments are not an option. However, current datasets for saliency model training mainly cover natural scenes, which makes research on saliency models for posters difficult. To address this problem, we collected 1700 high-quality posters together with their eye-tracking data, where each image was viewed by 15 participants. This can serve as groundwork for future research on saliency prediction for posters. Notably, posters are rich in text (e.g. titles, slogans, description paragraphs). The various types of text serve different functions, making some relatively more important than others. Nevertheless, this difference is largely neglected in current studies, where researchers place the same emphasis on all text regions; the problem is especially acute for saliency models for posters. Our further analysis of the eye-tracking results, with a focus on text, offers some insights into this issue.
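Eye-tracking data of the kind described above is usually converted from discrete fixation points into a continuous ground-truth saliency map before model training or evaluation. A common construction is to place an isotropic Gaussian at each fixation and normalise the result. The sketch below illustrates that idea only; the function name, `sigma`, and truncation radius are illustrative assumptions, not the authors' actual pipeline:

```python
import math

def fixation_density_map(fixations, width, height, sigma=25.0):
    """Build a continuous fixation-density (ground-truth saliency) map
    from discrete fixation points by summing an isotropic Gaussian
    centred at each fixation. Parameters are illustrative, not the
    dataset's actual preprocessing settings."""
    grid = [[0.0] * width for _ in range(height)]
    two_sigma_sq = 2.0 * sigma * sigma
    radius = int(3 * sigma)  # truncate each Gaussian at 3 sigma
    for fx, fy in fixations:
        for y in range(max(0, fy - radius), min(height, fy + radius + 1)):
            for x in range(max(0, fx - radius), min(width, fx + radius + 1)):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += math.exp(-d2 / two_sigma_sq)
    # Normalise to [0, 1] so maps built from different numbers of
    # observers (e.g. 15 participants per image) remain comparable.
    peak = max(max(row) for row in grid) or 1.0
    return [[v / peak for v in row] for row in grid]

# Example: one observer fixating near the poster title region.
density = fixation_density_map([(10, 10)], width=50, height=50, sigma=5.0)
```

With maps built this way, fixations pooled across all 15 viewers of an image yield a smooth target that a saliency model can be trained or scored against.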

Correspondence to Liqun Zhang or Xiaodong Li.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Fang, Y., Zhu, L., Cao, X., Zhang, L., Li, X. (2020). Visual Saliency: How Text Influences. In: Meiselwitz, G. (ed.) Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis. HCII 2020. Lecture Notes in Computer Science, vol. 12194. Springer, Cham. https://doi.org/10.1007/978-3-030-49570-1_4

  • DOI: https://doi.org/10.1007/978-3-030-49570-1_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49569-5

  • Online ISBN: 978-3-030-49570-1

  • eBook Packages: Computer Science (R0)
