
A foveated vision framework for visual change detection using motion and textural features

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

Change detection is an important process in many video-based applications such as anomalous event detection and video surveillance. This paper proposes a foveated vision framework that simulates the human visual system for change detection. The framework contains two phases: the first identifies regions with visual changes due to significant motion, and the second extracts detailed information about the change. In phase I, change proposals (CPs) are segregated from the background by analyzing intensity and motion features. In phase II, visual changes are estimated from the CPs by analyzing photometric and textural features. Each phase of analysis has a unique pre-generated archetype. A probabilistic refinement scheme rectifies the labeling of background and change, and the result of each phase is used to update the corresponding archetype immediately. Several well-known and recently proposed background modeling/subtraction algorithms are selected for a comparative study. Experiments are performed on various video datasets. On some videos, our method achieves accuracy up to 30% higher than some recently proposed methods; in the large-scale experiment over all testing videos, its average accuracy exceeds that of the second-best method by more than 3%.
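To make the two-phase pipeline concrete, the sketch below shows one possible interpretation of the abstract. It is not the authors' implementation: the dense optical-flow motion cue, the Laplacian-based texture cue, the running-mean "archetype", and all parameter names (motion_threshold, intensity_threshold, texture_threshold, alpha) are illustrative assumptions, and the paper's probabilistic refinement scheme is omitted.

```python
# Hypothetical sketch of a two-phase change detector inspired by the abstract.
# All cues, thresholds, and the archetype update rule are assumptions for illustration.
import cv2
import numpy as np

def phase1_change_proposals(prev_gray, curr_gray, motion_threshold=1.0, intensity_threshold=25):
    """Phase I: flag coarse change proposals where motion and intensity differences are large."""
    # Dense optical flow stands in for the paper's motion feature (assumption).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion_magnitude = np.linalg.norm(flow, axis=2)
    intensity_diff = cv2.absdiff(curr_gray, prev_gray)
    proposals = (motion_magnitude > motion_threshold) | (intensity_diff > intensity_threshold)
    return proposals.astype(np.uint8)

def phase2_refine(curr_gray, proposals, archetype, texture_threshold=15):
    """Phase II: inside the proposals, compare photometric and textural cues with an archetype."""
    photometric_diff = cv2.absdiff(curr_gray, archetype)
    # Crude texture cue: difference of Laplacian responses (illustrative only).
    texture_diff = np.abs(cv2.Laplacian(curr_gray, cv2.CV_32F) -
                          cv2.Laplacian(archetype, cv2.CV_32F))
    change = (proposals > 0) & ((photometric_diff > texture_threshold) |
                                (texture_diff > texture_threshold))
    return change.astype(np.uint8)

def update_archetype(archetype, curr_gray, change_mask, alpha=0.05):
    """Blend non-change pixels into the archetype right after each frame is analyzed."""
    background = change_mask == 0
    archetype = archetype.astype(np.float32)
    archetype[background] = (1 - alpha) * archetype[background] + alpha * curr_gray[background]
    return archetype.astype(np.uint8)

# Example driver over a video file (the path is a placeholder).
cap = cv2.VideoCapture("input.avi")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
archetype = prev_gray.copy()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    proposals = phase1_change_proposals(prev_gray, curr_gray)
    change = phase2_refine(curr_gray, proposals, archetype)
    archetype = update_archetype(archetype, curr_gray, change)
    prev_gray = curr_gray
```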

Author information

Correspondence to Kwok-Leung Chan.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Chan, KL. A foveated vision framework for visual change detection using motion and textural features. SIViP 15, 987–994 (2021). https://doi.org/10.1007/s11760-020-01823-z
