
Dynamic background subtraction via sparse representation of dynamic textures in a low-dimensional subspace

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

In this paper, we address the problem of background subtraction, especially for scenes containing dynamic textures. Unlike static textures, dynamic textures exhibit a wide range of per-pixel color variations over time, so representing the dynamics of these variations effectively is essential for successful dynamic background subtraction. To this end, the proposed method (i) models a training set of dynamic background scenes in a low-dimensional subspace and then (ii) represents the background of a test scene as a linear combination of a few coefficient matrices obtained by projecting the training scenes onto that subspace. More specifically, the proposed dynamic background subtraction method is based on the sparse representation of dynamic textures in the low-dimensional subspace. In the experiments, the proposed method shows promising performance in comparison with other competitive methods in the literature.
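To make the two-stage idea concrete, the following is a minimal Python sketch of a pipeline in this spirit, not the authors' implementation: it assumes plain PCA on vectorized grayscale frames as the low-dimensional subspace, Lasso-based sparse coding of a test frame over the projected training frames, and a simple residual threshold to obtain the foreground mask. All function names, parameter values, and the use of scikit-learn are illustrative assumptions; the paper itself works with coefficient matrices rather than vectorized frames.

# Hypothetical sketch (not the authors' method): background subtraction via
# sparse representation in a low-dimensional subspace.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def fit_background_model(train_frames, n_components=20):
    """Learn a low-dimensional subspace from vectorized background frames.

    train_frames: array of shape (num_frames, num_pixels), grayscale, float.
    Returns the fitted PCA model and the projected training coefficients,
    which act as the dictionary for the sparse representation.
    """
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(train_frames)   # (num_frames, n_components)
    return pca, coeffs

def subtract_background(test_frame, pca, train_coeffs, alpha=0.1, thresh=30.0):
    """Reconstruct the test frame's background as a sparse linear combination
    of the projected training frames and mark large residuals as foreground."""
    y = pca.transform(test_frame.reshape(1, -1))[0]       # project test frame
    lasso = Lasso(alpha=alpha, fit_intercept=False)
    lasso.fit(train_coeffs.T, y)                          # sparse code over training coefficients
    bg_coeff = train_coeffs.T @ lasso.coef_               # background in the subspace
    bg_frame = pca.inverse_transform(bg_coeff.reshape(1, -1))[0]  # back-project to image space
    residual = np.abs(test_frame.ravel() - bg_frame)
    return (residual > thresh).reshape(test_frame.shape)  # binary foreground mask

In this sketch the sparsity penalty alpha and the residual threshold thresh are arbitrary placeholders; in practice they would be tuned per sequence, and a more faithful reproduction would follow the subspace model and sparse-coding formulation described in the paper.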





Author information

Corresponding author

Correspondence to Ja-Won Seo.

About this article

Cite this article

Seo, JW., Kim, S.D. Dynamic background subtraction via sparse representation of dynamic textures in a low-dimensional subspace. SIViP 10, 29–36 (2016). https://doi.org/10.1007/s11760-014-0697-5

