Abstract
In video surveillance, detecting foreground objects in an image sequence from a still camera is critical for object tracking, activity recognition, and behavior understanding. Widely used background-updating models such as the single Gaussian and the mixture of Gaussians rely mainly on the mean gray level over a given observation period, which is inevitably affected by outliers and noise in the images. Because the mean averages both background and foreground pixel values over consecutive frames, it does not represent the true background of the scene. Thus, mean-based background models cannot detect low-contrast objects or respond promptly to sudden illumination changes. In this paper, a dual-mode scheme for foreground segmentation is proposed. The mode, i.e., the most frequently occurring gray level over the observed consecutive frames, is used to represent the background of the scene. To accommodate dynamic changes of the background, the proposed method uses a dual-mode model for background representation. The dual-mode model can represent the two main states of the background and detect a more complete silhouette of the foreground object against a dynamic background. The proposed method efficiently computes the exact gray-level mode of each individual pixel by simply dropping the oldest frame and adding the current frame in the observation window. A comparative evaluation of foreground segmentation methods is performed on Microsoft's Wallflower dataset. The results show that the proposed method responds quickly to illumination changes and reliably extracts foreground objects against low-contrast backgrounds.
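The incremental mode update described in the abstract can be illustrated with a minimal per-pixel sketch. This is not the authors' implementation: the class name, window size, and single-pixel scope are illustrative assumptions, and only a single mode is tracked here (the paper's dual-mode model maintains two such background states per pixel).

```python
from collections import deque

class SlidingMode:
    """Track the most frequent gray level (the mode) of the last
    `window` observations of one pixel. The histogram is updated
    incrementally: the oldest sample's bin is decremented and the
    new sample's bin incremented, rather than rebuilding it."""

    def __init__(self, window):
        self.window = window
        self.samples = deque()      # gray levels in the observation window
        self.hist = [0] * 256       # counts per gray level (8-bit image)

    def update(self, gray):
        if len(self.samples) == self.window:
            old = self.samples.popleft()
            self.hist[old] -= 1     # drop the oldest frame's value
        self.samples.append(gray)
        self.hist[gray] += 1        # add the current frame's value
        return self.background()

    def background(self):
        # mode of the window = index of the fullest histogram bin
        return max(range(256), key=self.hist.__getitem__)
```

For example, a pixel observing gray levels [100, 100, 200, 100, 100] (one foreground outlier of 200) has a mean of 120 but a mode of 100, so the mode-based background estimate is unaffected by the transient foreground value.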
Chiu, WY., Tsai, DM. Dual-mode detection for foreground segmentation in low-contrast video images. J Real-Time Image Proc 9, 647–659 (2014). https://doi.org/10.1007/s11554-011-0240-7