
Dual-mode detection for foreground segmentation in low-contrast video images

  • Original Research Paper
  • Journal of Real-Time Image Processing

Abstract

In video surveillance, detecting foreground objects in an image sequence from a stationary camera is critical for object tracking, activity recognition, and behavior understanding. Widely used background-updating models such as the single Gaussian and the mixture of Gaussians are based mainly on the mean gray level over a given observation period, which is inevitably affected by outliers and noise in the images. A mean computed by averaging both background and foreground pixel values over consecutive frames of the observation period does not represent the true background of the scene. Thus, mean-based background models can neither detect low-contrast objects nor respond promptly to sudden illumination changes. In this paper, a dual-mode scheme for foreground segmentation is proposed. The mode, i.e., the most frequently occurring gray level over the observed consecutive frames, is used to represent the background of the scene. To accommodate dynamic changes of the background, the proposed method uses a dual-mode model for background representation. The dual-mode model represents the two main states of the background and detects a more complete silhouette of the foreground object in a dynamic background. The proposed method can promptly compute the exact gray-level mode of each pixel in the image sequence by simply dropping the oldest frame and adding the current frame of the observation window. A comparative evaluation of foreground segmentation methods is performed on Microsoft's Wallflower dataset. The results show that the proposed method responds quickly to illumination changes and reliably extracts foreground objects against low-contrast backgrounds.
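To make the sliding-window mode update concrete, the following Python/NumPy sketch illustrates the idea under assumptions of this summary rather than the authors' actual implementation: 8-bit grayscale frames, a per-pixel 256-bin histogram over a fixed window of N frames, and, as one plausible reading of the dual-mode model, the two most frequent gray levels taken as the two background states. The function names, the threshold `thresh`, and the `min_support` guard are illustrative choices, not values from the paper.

```python
import numpy as np

def update_mode_background(hist, window, frame, slot):
    """Slide the observation window by one frame and return the per-pixel mode.

    hist   : (H, W, 256) int32 gray-level counts inside the current window
    window : (N, H, W)   uint8 circular buffer holding the last N frames
    frame  : (H, W)      uint8 newly captured frame
    slot   : index of the oldest entry in the circular buffer (e.g. t % N)
    """
    rows, cols = np.indices(frame.shape)
    hist[rows, cols, window[slot]] -= 1            # drop the oldest frame's gray levels
    hist[rows, cols, frame] += 1                   # add the current frame's gray levels
    window[slot] = frame                           # overwrite the oldest slot
    return hist.argmax(axis=2).astype(np.uint8)    # most frequent gray level per pixel

def segment_foreground(hist, frame, thresh=25, min_support=5):
    """Label pixels that are far from both assumed background states.

    The secondary state is trusted only if it occurs at least `min_support`
    times in the window; otherwise the primary mode alone decides.
    """
    order = np.argsort(hist, axis=2)                          # bins sorted by count
    mode1 = order[..., -1].astype(np.int16)                   # primary background state
    mode2 = order[..., -2].astype(np.int16)                   # secondary background state
    count2 = np.take_along_axis(hist, order[..., -2:-1], axis=2)[..., 0]
    fg1 = np.abs(frame.astype(np.int16) - mode1) > thresh
    fg2 = np.abs(frame.astype(np.int16) - mode2) > thresh
    return np.where(count2 >= min_support, fg1 & fg2, fg1)    # boolean foreground mask

# Usage sketch with synthetic frames standing in for a camera stream.
N, H, W = 30, 240, 320
window = np.full((N, H, W), 128, dtype=np.uint8)   # seed: a flat gray background
hist = np.zeros((H, W, 256), dtype=np.int32)
r, c = np.indices((H, W))
for k in range(N):
    hist[r, c, window[k]] += 1                     # histogram of the seed window
frame = np.full((H, W), 128, dtype=np.uint8)
frame[100:140, 150:200] = 90                       # a synthetic low-contrast object
bg = update_mode_background(hist, window, frame, slot=0)
mask = segment_foreground(hist, frame)             # True only inside the 40x50 patch
```

Dropping the oldest frame and inserting the newest keeps the histogram update at O(1) per pixel, which is what allows the exact mode to be maintained in real time; the `min_support` guard merely prevents a transient object's few votes from being mistaken for a second background state.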



Author information

Corresponding author

Correspondence to Du-Ming Tsai.


About this article

Cite this article

Chiu, WY., Tsai, DM. Dual-mode detection for foreground segmentation in low-contrast video images. J Real-Time Image Proc 9, 647–659 (2014). https://doi.org/10.1007/s11554-011-0240-7

  • DOI: https://doi.org/10.1007/s11554-011-0240-7
