A temporal-spatial background modeling of dynamic scenes

  • Research Article
  • Frontiers of Computer Science in China

Abstract

Moving object detection in dynamic scenes is a basic task for surveillance systems that collect sensor data. In this paper, we present a background subtraction algorithm, the Gaussian-kernel density estimator (G-KDE), that improves detection accuracy while reducing the computational load. The main innovation is that changes in the background are divided into continuous changes and stable changes, so that both dynamic scenes and moving objects that later merge into the background can be handled, and the background is modeled separately with a KDE model and Gaussian models. To obtain a temporal-spatial background model, sample selection at the update stage is based on region averages. At the detection stage, a neighborhood information content (NIC) measure suppresses false detections caused by small, un-modeled movements in the scene. Experimental results on three separate sequences show that the method accurately detects moving objects in complex scenes and can be used efficiently in various detection systems.
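
For readers who want a concrete picture of kernel-density background subtraction, the sketch below shows a minimal per-pixel Gaussian-kernel density estimator in Python. It illustrates the general technique only: the class name, the sample-buffer size, the kernel bandwidth, and the density threshold are assumptions, and the paper's specific contributions (the split into continuous and stable changes, region-average sample selection at the update stage, and the NIC test at the detection stage) are not reproduced here.

```python
# Minimal sketch of Gaussian-kernel density background subtraction.
# Illustrative only: parameters and structure are assumptions, not the
# paper's G-KDE method (no continuous/stable split, region-average
# sampling, or NIC step).
import numpy as np

class KDEBackground:
    def __init__(self, n_samples=20, bandwidth=15.0, threshold=1e-4):
        self.n_samples = n_samples    # background samples kept per pixel
        self.bandwidth = bandwidth    # Gaussian kernel bandwidth (sigma)
        self.threshold = threshold    # density below this => foreground
        self.samples = None           # (n_samples, H, W) intensity history

    def update(self, frame):
        """Push the newest gray-level frame into the per-pixel sample set."""
        frame = frame.astype(np.float32)
        if self.samples is None:
            # Initialize the whole buffer with the first frame.
            self.samples = np.repeat(frame[None, ...], self.n_samples, axis=0)
        else:
            # Shift samples down one slot and overwrite the oldest.
            self.samples = np.roll(self.samples, 1, axis=0)
            self.samples[0] = frame

    def detect(self, frame):
        """Return a boolean foreground mask (assumes update() was called once)."""
        frame = frame.astype(np.float32)
        diff = frame[None, ...] - self.samples            # (n_samples, H, W)
        kernel = np.exp(-0.5 * (diff / self.bandwidth) ** 2)
        kernel /= np.sqrt(2.0 * np.pi) * self.bandwidth
        density = kernel.mean(axis=0)                     # per-pixel KDE estimate
        return density < self.threshold
```

A typical loop would call detect() on each incoming gray-level frame and then update() the sample buffer with that frame, so the model follows slow background changes while flagging pixels whose current value is unlikely under the kernel density estimate.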


Author information

Corresponding author

Correspondence to Jiuyue Hao.

Additional information

Hao Jiuyue was born in 1984. She received her bachelor's degree in computer science from Communication University of China, Beijing in 2006. She is currently pursuing her PhD in computer science and technology at Beihang University, Beijing, and is visiting the University of California, Berkeley for one year. Her research interests include computer vision, pervasive computing, and intelligent transportation systems.

Li Chao received his BSc and PhD degrees in computer science and technology from Beihang University, Beijing, China in 1996 and 2005, respectively. He is now an associate professor and master's supervisor in the School of Computer Science and Engineering, Beihang University, where he works on data vitalization and computer vision.

Xiong Zhang received his bachelor's degree from Harbin Engineering University, Heilongjiang Province, China in 1982 and his MSc degree from Beihang University, Beijing in 1985. He is a professor and PhD supervisor in the School of Computer Science and Engineering, Beihang University, working on computer vision, wireless sensor networks, and information security.

Ejaz Hussain received his bachelor's degree from UET Lahore, Pakistan in 1998 and his master's degree from UET Taxila, Pakistan in 2006. He is currently pursuing a PhD in computer science and engineering at Beihang University, Beijing. His research interests include ad hoc sensor networks, pervasive computing, and adaptive vision.

About this article

Cite this article

Hao, J., Li, C., Xiong, Z. et al. A temporal-spatial background modeling of dynamic scenes. Front. Comput. Sci. China 5, 290–299 (2011). https://doi.org/10.1007/s11704-011-0377-3
