Self-adapted Frame Selection Module: Refine the Input Strategy for Video Saliency Detection

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13156)

Abstract

Video saliency detection aims to model the human visual system by predicting where observers attend while watching a dynamic scene. It is now widely deployed on a variety of devices, including surveillance cameras and Internet-of-Things sensors. Consecutive video frames typically contain a large amount of redundancy, yet common practice compensates for input uncertainty simply by widening the range of input frames. To overcome this problem, we propose the Self-Adapted Frame Selection (SAFS) module, which removes redundant information and selects highly informative frames. The module is robust and broadly applicable to complex video content, such as fast-moving scenes and footage drawn from different scenes. Since predicting saliency maps across multiple scenes is challenging, we also establish a set of benchmark videos for the scene-change scenario. Combined with TASED-NET, our method achieves significant improvements on both the DHF1K dataset and the scene-change dataset.
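The abstract does not specify the SAFS module's learned selection criterion. Purely as a rough illustration of the general idea it describes (discarding redundant consecutive frames and keeping informative ones), the sketch below greedily keeps a frame only if it differs enough from the last kept frame. The function name, the mean-pixel-difference score, and the fixed threshold are all hypothetical simplifications, not the paper's method.

```python
import numpy as np

def select_informative_frames(frames, threshold=10.0):
    """Greedy redundancy removal: keep a frame only if its mean absolute
    pixel difference from the last kept frame exceeds `threshold`.

    Illustrative heuristic only -- the paper's SAFS module learns its
    selection criterion rather than thresholding a hand-crafted score.
    """
    if not frames:
        return []
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(np.float32)
                              - frames[kept[-1]].astype(np.float32)))
        if diff > threshold:
            kept.append(i)
    return kept

# Toy input: frames 0-4 are near-duplicates of one static scene
# (values kept below 200 so adding noise cannot overflow uint8);
# frames 5-9 come from an unrelated random "scene".
rng = np.random.default_rng(0)
static = rng.integers(0, 200, size=(8, 8), dtype=np.uint8)
frames = [static + rng.integers(0, 2, size=(8, 8), dtype=np.uint8)
          for _ in range(5)]
frames += [rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
           for _ in range(5)]

selected = select_informative_frames(frames, threshold=20.0)
# frames 1-4 should be skipped as near-duplicates of frame 0
print(selected)
```

A learned selector can, in principle, subsume such a heuristic by scoring informativeness directly from features rather than raw pixel differences, which is what makes it robust to fast motion and scene changes.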

The above work was supported in part by grants from the Natural Science Foundation of Fujian Province of China (No. 2020J06023), the National Natural Science Foundation of China (NSFC) under Grant No. 62172046, the Special Project of Guangdong Provincial Department of Education in Key Fields of Colleges and Universities (2021ZDZX1063), and the Joint Project of Production, Teaching and Research of Zhuhai (ZH22017001210133PWC).


References

  1. Wang, T., et al.: Privacy-enhanced data collection based on deep learning for internet of vehicles. IEEE Trans. Ind. Inf. 16(10), 6663–6672 (2019)

  2. Qu, Y., Xiong, N.: RFH: a resilient, fault-tolerant and high-efficient replication algorithm for distributed cloud storage. In: 2012 41st International Conference on Parallel Processing, pp. 520–529. IEEE (2012)

  3. Yin, J., Lo, W., Deng, S., Li, Y., Wu, Z., Xiong, N.: Colbar: a collaborative location-based regularization framework for QoS prediction. Inf. Sci. 265, 68–84 (2014)

  4. Wang, T., Jia, W., Xing, G., Li, M.: Exploiting statistical mobility models for efficient Wi-Fi deployment. IEEE Trans. Veh. Technol. 62(1), 360–373 (2012)

  5. Fang, W., Yao, X., Zhao, X., Yin, J., Xiong, N.: A stochastic control approach to maximize profit on service provisioning for mobile cloudlet platforms. IEEE Trans. Syst. Man Cybern. Syst. 48(4), 522–534 (2016)

  6. Huang, M., Liu, A., Wang, T., Huang, C.: Green data gathering under delay differentiated services constraint for Internet of Things. Wirel. Commun. Mob. Comput. 2018, 23 (2018)

  7. Zeng, Y., Xiong, N., Park, J.H., Zheng, G.: An emergency-adaptive routing scheme for wireless sensor networks for building fire hazard monitoring. Sensors 10(6), 6128–6148 (2010)

  8. Li, H., Liu, J., Liu, R.W., Xiong, N., Wu, K., Kim, T.: A dimensionality reduction-based multi-step clustering method for robust vessel trajectory analysis. Sensors 17(8), 1792 (2017)

  9. Ferreira, J.F., Dias, J.: Attentional mechanisms for socially interactive robots - a survey. IEEE Trans. Auton. Ment. Dev. 6(2), 110–125 (2014)

  10. Hadizadeh, H., Bajić, I.V.: Saliency-aware video compression. IEEE Trans. Image Process. 23(1), 19–33 (2013)

  11. Wang, W., Shen, J., Guo, F., Cheng, M.-M., Borji, A.: Revisiting video saliency: a large-scale benchmark and a new model. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4894–4903 (2018)

  12. Chen, J., Li, K., Deng, Q., Li, K., Philip, S.Y.: Distributed deep learning model for intelligent video surveillance systems with edge computing. IEEE Trans. Ind. Inf. (2019)

  13. Hung, C.-C., et al.: Processing camera streams using hierarchical clusters. In: 2018 IEEE/ACM Symposium on Edge Computing (SEC), pp. 115–131 (2018)

  14. Liu, Y., Yang, F., Ginhac, D.: ACDnet: an action detection network for real-time edge computing based on flow-guided feature approximation and memory aggregation. Pattern Recogn. Lett. 145, 118–126 (2021)

  15. He, S., Lau, R.W.H., Liu, W., et al.: SuperCNN: a superpixelwise convolutional neural network for salient object detection. Int. J. Comput. Vis. 115, 330–344 (2015). https://doi.org/10.1007/s11263-015-0822-0

  16. Hou, Q., Cheng, M.-M., Hu, X., Borji, A., Tu, Z., Torr, P.H.S.: Deeply supervised salient object detection with short connections. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3203–3212 (2017)

  17. Min, K., Corso, J.J.: TASED-Net: temporally-aggregating spatial encoder-decoder network for video saliency detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2394–2403 (2019)

  18. Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., Xu, C.: GhostNet: more features from cheap operations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1580–1589 (2020)

  19. Soomro, K., Zamir, A.R.: Action recognition in realistic sports videos. In: Moeslund, T.B., Thomas, G., Hilton, A. (eds.) Computer Vision in Sports. ACVPR, pp. 181–208. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-09396-3_9

  20. Jiang, M., Huang, S., Duan, J., Zhao, Q.: SALICON: saliency in context. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1072–1080 (2015)

  21. Jiang, L., Xu, M., Wang, Z.: Predicting video saliency with object-to-motion CNN and two-layer convolutional LSTM. arXiv preprint arXiv:1709.06316 (2017)

  22. Pan, J., et al.: SalGAN: visual saliency prediction with generative adversarial networks. arXiv preprint arXiv:1701.01081 (2017)

Author information

Correspondence to Yang Wang.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Wu, S., Wang, Y., Wang, T., Jia, W., Xie, R. (2022). Self-adapted Frame Selection Module: Refine the Input Strategy for Video Saliency Detection. In: Lai, Y., Wang, T., Jiang, M., Xu, G., Liang, W., Castiglione, A. (eds.) Algorithms and Architectures for Parallel Processing. ICA3PP 2021. Lecture Notes in Computer Science, vol. 13156. Springer, Cham. https://doi.org/10.1007/978-3-030-95388-1_33

  • DOI: https://doi.org/10.1007/978-3-030-95388-1_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-95387-4

  • Online ISBN: 978-3-030-95388-1

  • eBook Packages: Computer Science (R0)
