
Automatic Foreground Seeds Discovery for Robust Video Saliency Detection

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10736)

Abstract

In this paper, we propose a novel algorithm for salient object detection in unconstrained videos. Although various methods have been proposed for this task, video saliency detection remains challenging because of the difficulty of discovering objects and of exploiting motion cues. Most existing methods adopt background priors to detect salient objects, but they are prone to fail when foreground objects are similar to the background. In this work, we aim to discover robust foreground priors as a complement to background priors and thereby improve performance. Given an input video, we consider motion and appearance cues separately to generate initial foreground/background seeds. We then learn a global object appearance model from the initial seeds and remove unreliable seeds according to their foreground likelihood. Finally, the seeds serve as queries to rank all superpixels in each frame and generate saliency maps. Experimental results on a challenging public dataset demonstrate the advantage of our algorithm over state-of-the-art methods.
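The abstract's final step, in which retained seeds act as queries to rank every superpixel, is commonly realized with graph-based manifold ranking on a superpixel adjacency graph. The sketch below illustrates that general idea only; it is not the authors' implementation, and the function name manifold_rank, the feature choice, the edge construction, and the parameters sigma and alpha are illustrative assumptions.

import numpy as np

def manifold_rank(features, edges, seed_idx, sigma=0.1, alpha=0.99):
    """Rank superpixels against a set of seed (query) superpixels.

    features : (N, D) array of per-superpixel descriptors (e.g. mean Lab color)
    edges    : iterable of (i, j) index pairs for spatially adjacent superpixels
    seed_idx : indices of superpixels kept as foreground seeds
    Returns an (N,) relevance score in [0, 1]; higher means more salient.
    """
    n = features.shape[0]

    # Affinity matrix restricted to the superpixel adjacency graph.
    W = np.zeros((n, n))
    for i, j in edges:
        w = np.exp(-np.linalg.norm(features[i] - features[j]) ** 2 / sigma)
        W[i, j] = W[j, i] = w

    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d = np.maximum(W.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt

    # Query indicator vector: 1 for seed superpixels, 0 elsewhere.
    y = np.zeros(n)
    y[list(seed_idx)] = 1.0

    # Closed-form ranking scores f* = (I - alpha * S)^{-1} y.
    f = np.linalg.solve(np.eye(n) - alpha * S, y)

    # Rescale to [0, 1] so scores can be painted back as a saliency map.
    return (f - f.min()) / (f.max() - f.min() + 1e-12)

With, for example, mean-color features computed over SLIC superpixels and edges taken from spatial adjacency, the returned scores can be assigned back to each superpixel's pixels to form a per-frame saliency map; these input choices are likewise assumptions made for illustration.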

Author information

Correspondence to Lin Zhang, Yao Lu or Tianfei Zhou.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Zhang, L., Lu, Y., Zhou, T. (2018). Automatic Foreground Seeds Discovery for Robust Video Saliency Detection. In: Zeng, B., Huang, Q., El Saddik, A., Li, H., Jiang, S., Fan, X. (eds) Advances in Multimedia Information Processing – PCM 2017. PCM 2017. Lecture Notes in Computer Science, vol 10736. Springer, Cham. https://doi.org/10.1007/978-3-319-77383-4_9

  • DOI: https://doi.org/10.1007/978-3-319-77383-4_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-77382-7

  • Online ISBN: 978-3-319-77383-4

  • eBook Packages: Computer Science (R0)
