
A K2 graph-based fusion model with manifold ranking for robot image saliency detection

  • Regular Paper
  • Published in Progress in Artificial Intelligence

Abstract

Saliency detection is a key step in computer vision tasks: it extracts regions of interest from an image and is widely used in image compression, image segmentation, object detection, and other fields, where it has achieved remarkable results. Traditional image saliency detection algorithms suffer from problems such as incomplete detection of salient objects and inhomogeneity inside the detected objects. This paper proposes a K2 graph-based fusion model with manifold ranking for robot image saliency detection. The proposed algorithm treats superpixels as nodes to construct a K-nearest-neighbor graph model and a K-regular graph model. A manifold ranking algorithm is then used to compute the saliency value of the superpixel nodes on each of the two graph models, and the per-node saliency values from the two models are combined by a modified weighted fusion approach to obtain the final saliency map. Experiments are conducted on three public data sets, MSRA-10K, SED2 and ECSSD, and the proposed algorithm is compared with 14 state-of-the-art saliency detection methods. The average AUC and F-score exceed 89% and 70%, respectively. The results show that the new algorithm detects salient objects completely, and the interior of the detected objects is uniform and smooth.
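For readers who want a concrete picture of the two-graph pipeline outlined above, the following Python sketch illustrates its structure: superpixels as graph nodes, closed-form manifold ranking on two affinity graphs, and a blend of the two per-node saliency values. It is only a minimal sketch under stated assumptions; the SLIC parameters, the boundary-background queries, the stand-in for the K-regular graph, and the simple linear fusion weight are illustrative choices, not the authors' exact method.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.util import img_as_float
from sklearn.neighbors import kneighbors_graph


def normalize(x):
    """Scale a vector to the [0, 1] range."""
    x = x - x.min()
    return x / (x.max() + 1e-12)


def manifold_ranking(W, y, alpha=0.99):
    """Closed-form manifold ranking f* = (D - alpha * W)^{-1} y over affinity W."""
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)


def knn_affinity(feats, k, sigma=10.0):
    """Affinity matrix of a symmetrized k-nearest-neighbor graph over node features."""
    dist = kneighbors_graph(feats, k, mode="distance").toarray()
    dist = np.maximum(dist, dist.T)                  # make the graph undirected
    return np.where(dist > 0, np.exp(-sigma * dist ** 2), 0.0)


def saliency_sketch(rgb_image, n_segments=200, k_knn=8, k_reg=4, fusion_weight=0.5):
    img = img_as_float(rgb_image)
    labels = slic(img, n_segments=n_segments, start_label=0)   # superpixels as nodes
    n = labels.max() + 1
    feats = np.array([img[labels == i].mean(axis=0) for i in range(n)]).reshape(n, -1)

    # Two graph models over the same superpixel nodes; the second is only a crude
    # stand-in for the paper's K-regular graph (different connectivity, same nodes).
    W_knn = knn_affinity(feats, k_knn)
    W_reg = knn_affinity(feats, k_reg)

    # Background queries: superpixels touching the image border (a common prior).
    y = np.zeros(n)
    for edge in (labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]):
        y[np.unique(edge)] = 1.0

    # Rank each graph against the background queries and invert to get saliency.
    s_knn = 1.0 - normalize(manifold_ranking(W_knn, y))
    s_reg = 1.0 - normalize(manifold_ranking(W_reg, y))

    # Plain linear blend of the two per-node saliency values (the paper uses a
    # modified weighted fusion; this blend is only a placeholder).
    s = fusion_weight * s_knn + (1.0 - fusion_weight) * s_reg
    return s[labels]                                 # map node saliency back to pixels
```

The sketch only conveys the shape of the method: two graphs over the same superpixel nodes, manifold ranking on each, and a fusion of the resulting saliency maps back onto the pixel grid.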

Author information

Corresponding author

Correspondence to Rui Yang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Ye, D., Yang, R. A K2 graph-based fusion model with manifold ranking for robot image saliency detection. Prog Artif Intell 11, 233–250 (2022). https://doi.org/10.1007/s13748-022-00280-8
