DOI: 10.1145/2448556.2448647

Visual attention with contextual saliencies of a scene

Published: 17 January 2013

Abstract

This paper examines the competition and cooperation that may take place in human visual attention between the bottom-up saliencies incurred by photometric signatures and the top-down saliencies incurred by the primary context of a scene. It is found that the strength of a scene's primary context is a dominant factor in determining visual fixations: where a scene has a strong context, the objects and/or regions tightly coupled with that context dominate in defining the saliencies that guide fixations. It appears that human visual perception assigns a higher priority to efficiently understanding a visual context than to responding directly to photometric saliencies unsupported by that context. These claims derive from experimental verification of the following conjectures: 1) Bottom-up saliencies tend to be more significant when the context of the observed scene is weak or nonexistent. 2) For a scene with a strong context, top-down contextual saliencies, such as the objects and regions associated with understanding the present context, tend to dominate over the bottom-up saliencies. 3) When a scene with a strong context includes both positive and negative contextual saliencies, where positive/negative refers to saliencies that are significant for understanding the context and, respectively, well expected/unexpected for that context under prior knowledge, the negative saliencies are assigned a higher priority for attention than the positive ones.
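The paper reports experimental findings rather than an algorithm, but the mechanism the conjectures describe — contextual saliency overriding photometric saliency as context strength grows, with negative (context-relevant but unexpected) regions prioritized over positive ones — can be sketched as a weighted blend of saliency maps. Everything below (the function name, the linear weighting, the `neg_boost` multiplier) is an illustrative assumption, not the authors' model.

```python
import numpy as np

def combined_saliency(s_bottom_up, s_context, context_strength,
                      unexpectedness=None, neg_boost=2.0):
    """Blend bottom-up and top-down (contextual) saliency maps.

    context_strength in [0, 1]: 0 = weak/no scene context, so
    bottom-up photometric saliency dominates; 1 = strong context,
    so contextual saliency dominates (conjectures 1 and 2).

    unexpectedness (same shape, values in [0, 1]) marks negative
    contextual saliencies, i.e. regions relevant to the context but
    unexpected under prior knowledge; these get an extra priority
    boost (conjecture 3).
    """
    s_bu = np.asarray(s_bottom_up, dtype=float)
    s_td = np.asarray(s_context, dtype=float)
    w = float(np.clip(context_strength, 0.0, 1.0))

    # Negative contextual saliencies outrank positive ones.
    if unexpectedness is not None:
        s_td = s_td * (1.0 + neg_boost * np.asarray(unexpectedness, dtype=float))

    s = (1.0 - w) * s_bu + w * s_td
    peak = s.max()
    return s / peak if peak > 0 else s
```

Under this sketch, the predicted fixation is simply the argmax of the blended map: with `context_strength=0` it falls on the photometric peak, with `context_strength=1` on the contextual peak, and a marked unexpected region can outrank a stronger expected one.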


Cited By

  • (2021) To See or Not to See. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(1), 1–25. DOI: 10.1145/3448123. Online publication date: 30 Mar 2021.


    Published In

    ICUIMC '13: Proceedings of the 7th International Conference on Ubiquitous Information Management and Communication
    January 2013
    772 pages
    ISBN:9781450319584
    DOI:10.1145/2448556
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. bottom-up and top-down process
    2. context saliency
    3. eye movement
    4. fixations

    Qualifiers

    • Research-article


    Conference

    ICUIMC '13

    Acceptance Rates

    Overall Acceptance Rate 251 of 941 submissions, 27%
