
A Functional and Statistical Bottom-Up Saliency Model to Reveal the Relative Contributions of Low-Level Visual Guiding Factors

Published in Cognitive Computation.

Abstract

When looking at a scene, we move our eyes to place successive regions of interest on the fovea, the centre of the retina. At each fixation, only this foveal region is analysed in detail by the visual system. Visual attention mechanisms control eye movements and depend on two types of factors: bottom-up and top-down. Bottom-up factors include visual features such as colour, luminance, edges, and orientation. In this paper, we quantitatively evaluate the relative contributions of basic low-level features as candidate factors guiding visual attention, and hence eye movements, and we study how these features can be combined in a bottom-up saliency model. Our work consists of three interacting parts: a functional saliency model, a statistical model, and eye-movement data recorded during free viewing of natural scenes. The functional saliency model, inspired by the primate visual system, decomposes a visual scene into different feature maps. The statistical model indicates which features best explain the recorded eye movements; it reveals an essential role for high-frequency luminance and an important contribution of the central fixation bias. The relative contributions of the features, estimated by the statistical model, are then used to combine the feature maps into a saliency map. Finally, a comparison between the saliency model and the experimental data confirms the influence of these contributions.
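The abstract outlines a three-stage pipeline: feature maps are extracted, a statistical model estimates each feature's relative contribution from recorded fixations, and the weighted maps are summed into a saliency map that is then compared with eye-movement data. The minimal Python sketch below illustrates the last two stages under stated assumptions: the function names, the Gaussian form of the centre bias, the example weights, and the use of the normalised scanpath saliency (NSS) score are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract; all names,
# the Gaussian centre-bias model, and the example weights are assumptions,
# not the authors' actual implementation.
import numpy as np


def centre_bias(shape, sigma_frac=0.25):
    """Isotropic Gaussian centred on the image: a common stand-in for the
    central fixation bias whose contribution the paper estimates."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sigma = sigma_frac * max(h, w)
    g = np.exp(-(((ys - (h - 1) / 2) ** 2 + (xs - (w - 1) / 2) ** 2)
                 / (2 * sigma ** 2)))
    return g / g.max()


def combine_feature_maps(feature_maps, weights, bias_weight=0.0):
    """Weighted linear combination of rescaled feature maps plus an
    optional centre-bias term, yielding a master saliency map."""
    shape = next(iter(feature_maps.values())).shape
    saliency = np.zeros(shape)
    for name, fmap in feature_maps.items():
        span = fmap.max() - fmap.min()
        fmap = (fmap - fmap.min()) / (span + 1e-12)   # rescale to [0, 1]
        saliency += weights.get(name, 0.0) * fmap
    if bias_weight:
        saliency += bias_weight * centre_bias(shape)
    return saliency


def normalised_scanpath_saliency(saliency, fixations):
    """NSS: mean z-scored saliency at fixated pixels; values above zero
    mean the map predicts fixations better than chance."""
    z = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    return float(np.mean([z[row, col] for row, col in fixations]))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for the feature maps the functional model would extract.
    maps = {
        "hf_luminance": rng.random((240, 320)),
        "colour": rng.random((240, 320)),
        "orientation": rng.random((240, 320)),
    }
    # Illustrative relative contributions; in the paper these come from
    # the statistical model fitted to the recorded eye movements.
    weights = {"hf_luminance": 0.6, "orientation": 0.3, "colour": 0.1}
    smap = combine_feature_maps(maps, weights, bias_weight=0.5)
    fixations = [(120, 160), (95, 210)]   # (row, col) fixation positions
    print(normalised_scanpath_saliency(smap, fixations))
```

In the paper itself the weights are estimated by the statistical model rather than fixed by hand; the sketch only shows how such contributions would be applied once estimated, and how the resulting map could be scored against recorded fixations.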



Acknowledgments

This work was partially supported by grants from the Rhône-Alpes Region through the LIMA project. T. Ho-Phuoc's Ph.D. is funded by the French MESR. We thank G. Ionescu (GIPSA-Lab) for the experimental setup and S. Achard (GIPSA-Lab) for fruitful discussions on the bootstrap estimate.

Author information

Corresponding author

Correspondence to Tien Ho-Phuoc.


About this article

Cite this article

Ho-Phuoc, T., Guyader, N. & Guérin-Dugué, A. A Functional and Statistical Bottom-Up Saliency Model to Reveal the Relative Contributions of Low-Level Visual Guiding Factors. Cogn Comput 2, 344–359 (2010). https://doi.org/10.1007/s12559-010-9078-8

