Cellular Automata Based on Occlusion Relationship for Saliency Detection

  • Conference paper
  • First Online:
Knowledge Science, Engineering and Management (KSEM 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9983)

Abstract

Unlike traditional images, 4D light field images capture scene structure information and have been shown to support better saliency estimation. Instead of estimating depth or exploiting the unique refocusing capability, we propose to derive an occlusion relationship from the raw light field image and use it for saliency detection. The occlusion relationship is computed from Epipolar Plane Images (EPIs) extracted from the raw light field and indicates whether a region is more likely to belong to the foreground or the background. By analyzing occlusion in the scene, true object edges can be distinguished from surface textures, which helps segment objects completely. Moreover, we assume that non-occluded objects are more likely to be foreground, whereas objects occluded by many other objects tend to be background. The occlusion relationship is then integrated into a modified cellular-automata-based saliency detection framework to obtain the salient regions. Experimental results demonstrate that the occlusion relationship improves saliency detection accuracy, and that the proposed method achieves significantly higher accuracy and robustness than state-of-the-art light field saliency detection methods.
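To make the propagation step more concrete, below is a minimal, illustrative sketch (not the authors' implementation) of how an occlusion-derived foreground prior over superpixels could be refined with a cellular-automata style update. All names (ca_saliency, occlusion_prior, adjacency) and the constants are assumptions made for illustration; the paper's actual update rule, parameters, and EPI-based occlusion computation may differ.

```python
# Illustrative sketch only: refine an occlusion-based foreground prior with a
# cellular-automata style update over superpixels. Function and variable names
# (ca_saliency, occlusion_prior, adjacency) and all constants are assumptions,
# not values taken from the paper.
import numpy as np

def ca_saliency(occlusion_prior, features, adjacency, n_iters=20, sigma=0.1):
    """occlusion_prior: (N,) initial saliency per superpixel (less occluded -> higher).
    features: (N, D) mean color per superpixel (e.g. CIELab).
    adjacency: (N, N) boolean matrix marking neighboring superpixels."""
    # Influence matrix: adjacent superpixels with similar color affect each other more.
    diff = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    impact = np.exp(-diff / sigma) * adjacency
    impact = impact / (impact.sum(axis=1, keepdims=True) + 1e-8)  # row-normalize

    # Each cell keeps part of its own state; cells with very similar neighbors
    # are pulled more strongly toward their neighborhood average.
    strongest = impact.max(axis=1)
    strongest = (strongest - strongest.min()) / (np.ptp(strongest) + 1e-8)
    keep = 0.6 + 0.3 * (1.0 - strongest)

    s = occlusion_prior.astype(float).copy()
    for _ in range(n_iters):
        s = keep * s + (1.0 - keep) * (impact @ s)   # synchronous CA update
    return (s - s.min()) / (np.ptp(s) + 1e-8)        # rescale to [0, 1]

# Hypothetical usage: occ_count[i] = number of regions estimated to occlude
# superpixel i (e.g. from EPI line slopes); fewer occluders -> stronger prior.
# prior = 1.0 - occ_count / (occ_count.max() + 1e-8)
# saliency = ca_saliency(prior, lab_means, adjacency)
```

The sketch only shows the general idea that a prior is iteratively smoothed by color-similar neighbors until salient regions become spatially coherent; the weighting between a cell's own state and its neighbors is a design choice, not a value reported in the paper.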



Author information

Corresponding author: Hao Sheng

Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Sheng, H., Feng, W., Zhang, S. (2016). Cellular Automata Based on Occlusion Relationship for Saliency Detection. In: Lehner, F., Fteimi, N. (eds) Knowledge Science, Engineering and Management. KSEM 2016. Lecture Notes in Computer Science (LNAI), vol. 9983. Springer, Cham. https://doi.org/10.1007/978-3-319-47650-6_3

  • DOI: https://doi.org/10.1007/978-3-319-47650-6_3

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-47649-0

  • Online ISBN: 978-3-319-47650-6

  • eBook Packages: Computer Science, Computer Science (R0)
