DOI: 10.1145/3581754.3584109

Distance-based Visual Scanpath Estimation and Applications

Published: 27 March 2023

ABSTRACT

Existing studies estimate visual attention with bottom-up methods (e.g., salient image features) and top-down methods (e.g., task-driven information-searching patterns), and have made steady progress in predicting probable visual attention across image types, user groups, and viewing durations. However, these works mostly assume a fixed viewing distance and do not generalise to visual scanpath estimation at dynamic viewing distances. This research fills the gap by 1) investigating users' visual attention patterns at different viewing distances, and 2) developing a distance-based visual scanpath estimation model that generates human-like scanpaths at different distances. The research also prepares a large-scale eye-tracking dataset to support the model and evaluates it in applications such as graphic design and advertising. The main contributions are two-fold: first, a novel visual scanpath estimation model that works at different viewing distances; second, insights into how effective the model is across applications.
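As a minimal sketch of the underlying intuition (not the model presented in this talk), viewing distance can enter a generic saliency-sampling loop through the pixels-per-degree factor: windows defined in degrees of visual angle cover more pixels when the viewer sits farther from the screen, so a winner-take-all rule with inhibition of return, in the spirit of classic saliency models, produces different fixation sequences at different distances. The functions, parameter values, and screen geometry below are hypothetical assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def pixels_per_degree(distance_cm, screen_width_cm=60.0, screen_width_px=1920):
    # One degree of visual angle spans roughly 2 * d * tan(0.5 deg) cm on the
    # screen, so the same degree covers more pixels at larger viewing distances.
    cm_per_degree = 2.0 * distance_cm * np.tan(np.deg2rad(0.5))
    return screen_width_px * cm_per_degree / screen_width_cm


def generate_scanpath(saliency, distance_cm, n_fixations=8,
                      ior_sigma_deg=2.0, ior_strength=0.9):
    # Greedy winner-take-all fixation sampling with inhibition of return (IoR).
    # Distance enters only through pixels_per_degree: the pooling and IoR
    # windows are fixed in degrees, so they widen (in pixels) with distance.
    ppd = pixels_per_degree(distance_cm)
    priority = gaussian_filter(saliency.astype(float), sigma=0.5 * ppd)
    ior_sigma_px = ior_sigma_deg * ppd
    h, w = priority.shape
    ys, xs = np.mgrid[0:h, 0:w]
    path = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(priority), priority.shape)
        path.append((int(x), int(y)))
        # Suppress the neighbourhood of the chosen fixation so the next
        # fixation is drawn away from already-visited regions.
        suppression = ior_strength * np.exp(
            -((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * ior_sigma_px ** 2))
        priority = priority * (1.0 - suppression)
    return path


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    saliency = gaussian_filter(rng.random((270, 480)), sigma=10)  # stand-in saliency map
    print("40 cm: ", generate_scanpath(saliency, distance_cm=40))
    print("120 cm:", generate_scanpath(saliency, distance_cm=120))
```

Running the sketch on the same stand-in saliency map prints two different fixation sequences: at the larger viewing distance the two-degree inhibition window covers more pixels, so fixations spread more widely across the image.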


• Published in

  IUI '23 Companion: Companion Proceedings of the 28th International Conference on Intelligent User Interfaces
  March 2023, 266 pages
  ISBN: 9798400701078
  DOI: 10.1145/3581754

      Copyright © 2023 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 27 March 2023


      Qualifiers

      • invited-talk
      • Research
      • Refereed limited

      Acceptance Rates

Overall acceptance rate: 746 of 2,811 submissions, 27%
