
A Novel Foreground Segmentation Method Using Convolutional Neural Network

  • Conference paper
Recent Trends in Image Processing and Pattern Recognition (RTIP2R 2018)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1035))

Abstract

Background subtraction is a widely used approach for foreground segmentation (moving object detection). Many methods based on background subtraction have been proposed; however, these algorithms produce false alarms in complex scenarios such as dynamic backgrounds, camera motion, shadows, illumination variation, and camouflage. This paper proposes a foreground segmentation system based on a convolutional neural network framework to handle these complex scenarios. In this approach, non-handcrafted features learned by the deep neural network are used to detect moving objects; these learned features are more robust and efficient than handcrafted features. The presented method is trained using both spatial and temporal information, and a new background model is proposed to estimate the temporal information. The model is trained end-to-end on input images, background images, and optical flow images. For training, a few images and their corresponding ground-truth images were randomly selected from CDnet 2014. The proposed method is evaluated on benchmark datasets and outperforms state-of-the-art methods in both qualitative and quantitative analyses. Owing to its network architecture, the proposed model is capable of real-time processing and can therefore be used in real-world surveillance applications.
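For context, background subtraction in its simplest handcrafted form thresholds the per-pixel difference between the current frame and a background model; the paper's contribution is to replace such fixed rules with features learned by a CNN. The following is a minimal NumPy sketch of the classical baseline only, not the authors' method; the threshold value is an arbitrary illustrative choice.

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Classical per-pixel background subtraction.

    frame, background: uint8 grayscale arrays of the same shape.
    Returns a binary foreground mask (1 = pixel classified as moving object).
    """
    # Cast to a signed type before subtracting to avoid uint8 wrap-around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: a flat background and a frame containing a bright 2x2 "object".
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200          # the moving-object region
mask = subtract_background(frame, background)
# mask.sum() == 4: exactly the four object pixels are flagged
```

Rules of this kind fail precisely in the scenarios the abstract lists (dynamic backgrounds, shadows, camouflage), since a single global threshold cannot separate such changes from true object motion, which motivates learning the decision function instead.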




Corresponding author

Correspondence to Midhula Vijayan.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Vijayan, M., Mohan, R. (2019). A Novel Foreground Segmentation Method Using Convolutional Neural Network. In: Santosh, K., Hegadi, R. (eds) Recent Trends in Image Processing and Pattern Recognition. RTIP2R 2018. Communications in Computer and Information Science, vol 1035. Springer, Singapore. https://doi.org/10.1007/978-981-13-9181-1_3


  • Print ISBN: 978-981-13-9180-4

  • Online ISBN: 978-981-13-9181-1
