A Novel Saliency Prediction Method Based on Fast Radial Symmetry Transform and Its Generalization


Abstract

Symmetry has been shown to be an important indicator of visual attention. In this paper, we propose a novel saliency prediction method based on the fast radial symmetry transform (FRST) and its generalization (GFRST). We make two contributions. First, we propose a novel saliency predictor based on FRST. Unlike most previous work, the new approach does not require a full set of visual features (intensity, color, orientation); it uses only symmetry and center bias to model human fixations at the behavioral level. The new model is shown to achieve higher prediction accuracy and lower computational complexity than an existing symmetry-based saliency prediction method. Second, we propose using GFRST for predicting visual attention. GFRST is shown to outperform FRST because it can detect symmetries distorted by parallel projection.
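To make the pipeline described above concrete, here is a minimal Python sketch (assuming NumPy and SciPy) of a symmetry-plus-center-bias saliency map. It is an illustration, not the authors' implementation: the function names (`frst`, `saliency`, `gfrst_approx`), the radii, the `alpha` exponent, the data-driven stand-in for the normalizing constant k_n, the smoothing scales, and the center-bias width are all assumptions. In particular, `gfrst_approx` only crudely mimics what a generalized transform achieves, by running FRST on anisotropically rescaled copies of the image rather than by modifying the voting geometry itself.

```python
# A minimal sketch, assuming NumPy and SciPy, of a symmetry-plus-center-bias
# saliency map in the spirit of the proposed predictor. Parameter values are
# illustrative assumptions, not the authors' settings.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def frst(gray, radii, alpha=2.0, grad_thresh=0.05):
    """Full fast radial symmetry transform (Loy & Zelinsky, 2003) of a
    float grayscale image, averaged over the given radii."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    h, w = gray.shape
    mask = mag > grad_thresh * mag.max()      # keep only strong gradients
    ys, xs = np.nonzero(mask)
    ux, uy = gx[mask] / mag[mask], gy[mask] / mag[mask]
    out = np.zeros((h, w))
    for n in radii:
        O = np.zeros((h, w))                  # orientation projection image
        M = np.zeros((h, w))                  # magnitude projection image
        for sign in (+1, -1):                 # positively/negatively affected pixels
            px = np.clip(np.round(xs + sign * n * ux).astype(int), 0, w - 1)
            py = np.clip(np.round(ys + sign * n * uy).astype(int), 0, h - 1)
            np.add.at(O, (py, px), sign)
            np.add.at(M, (py, px), sign * mag[mask])
        k_n = max(O.max(), 1e-9)              # data-driven stand-in for k_n
        F_n = (M / k_n) * (np.abs(O) / k_n) ** alpha
        out += gaussian_filter(F_n, sigma=0.25 * n)   # per-radius smoothing
    return out / len(radii)

def saliency(gray, radii=(2, 4, 8, 16), sigma_frac=0.3):
    """Symmetry saliency modulated by a centered Gaussian (center bias)."""
    h, w = gray.shape
    s = frst(gray, radii)
    s = (s - s.min()) / (s.max() - s.min() + 1e-9)
    yy, xx = np.mgrid[0:h, 0:w]
    bias = np.exp(-((xx - w / 2) ** 2 / (2 * (sigma_frac * w) ** 2)
                    + (yy - h / 2) ** 2 / (2 * (sigma_frac * h) ** 2)))
    return s * bias

def gfrst_approx(gray, radii=(2, 4, 8, 16), ratios=(1.0, 0.75, 0.5)):
    """Hypothetical stand-in for GFRST: circles seen under parallel
    projection become ellipses, so run FRST on anisotropically rescaled
    copies of the image and max-combine the rescaled responses."""
    h, w = gray.shape
    acc = np.zeros((h, w))
    for r in ratios:
        squashed = zoom(gray, (r, 1.0), order=1)          # compress rows
        resp = frst(squashed, radii)
        back = zoom(resp, (h / resp.shape[0], 1.0), order=1)[:h, :w]
        if back.shape[0] < h:                              # guard rounding
            back = np.pad(back, ((0, h - back.shape[0]), (0, 0)))
        acc = np.maximum(acc, back)
    return acc
```

Calling `saliency(img)` on a 2-D float array returns the combined map; a real evaluation would normalize it and score it against recorded fixation maps. Note that `gfrst_approx` only compresses the image vertically, so a fuller treatment would sweep over orientations of the anisotropic rescaling as well.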





Acknowledgments

The work described in this paper was supported by a Research Studentship and a grant from CityU (Project No. 7004240). We thank Mr. Yang Lou for proofreading the manuscript.

Author information


Corresponding author

Correspondence to Shiu Yin Yuen.

Ethics declarations

Conflict of Interest

Jiayu Liang and Shiu Yin Yuen declare that they have no conflict of interest.

Informed Consent

All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008. Additional informed consent was obtained from all patients for whom identifying information is included in this article.

Human and Animal Rights

This article does not contain any studies with human participants or animals performed by any of the authors.


About this article


Cite this article

Liang, J., Yuen, S.Y. A Novel Saliency Prediction Method Based on Fast Radial Symmetry Transform and Its Generalization. Cogn Comput 8, 693–702 (2016). https://doi.org/10.1007/s12559-016-9406-8

