Adaptive background modeling of complex scenarios based on pixel level learning modeled with a retinotopic self-organizing map and radial basis mapping


Abstract

Background modeling in video sequences is a prominent topic that has generated highly relevant work on models, algorithms, and databases. Its importance stems from real-world applications such as video segmentation, surveillance, the Internet of Things (IoT), privacy, and video compression. This paper proposes an adaptive background modeling method, termed Radial Basis RESOM (RB-SOM), as a contribution to this field. RB-SOM is able to deal with video sequences involving complex scenarios. The model assigns to each pixel an adaptive learning rate determined by two schemes: a radial basis mapping function and a retinotopic self-organizing neural network (RESOM). The radial basis function activates the learning of pixels that represent the background and inhibits the learning of pixels that correspond to dynamic objects, even when they remain stationary for a long time. RESOM identifies gradual illumination changes and improves the separation of background and foreground. The model also performs an entropy and correlation analysis to detect abrupt scenario changes, such as sudden illumination changes. Experimental results indicate that the performance of the proposed model is comparable to state-of-the-art background modeling algorithms such as BEWiS and LaBGen, as well as motion detection models such as SuBSENSE and PAWCS. Moreover, RB-SOM achieves the best processing times and stabilizes the background when the video sequence presents sudden scenario changes. In addition, RB-SOM obtains competitive results on videos with dynamic background, jittering, illumination changes, and dynamic objects that remain stationary.
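As a rough illustration of the pixel-level idea described above (a minimal sketch, not the authors' implementation, whose exact equations are given in the paper), the following code gates a per-pixel running background update with a radial basis function of the distance between the incoming pixel and its background model, and uses a histogram-correlation test to trigger re-initialization on abrupt scene changes. All function names and parameters (eta_max, sigma, corr_thr) are illustrative assumptions.

```python
import numpy as np

def rbf_learning_rate(frame, background, eta_max=0.05, sigma=15.0):
    # Radial-basis gate: pixels that match the background model get a learning
    # rate near eta_max; large deviations (likely foreground) get a rate near 0,
    # so stationary foreground objects are not absorbed into the background.
    dist2 = np.sum((frame.astype(np.float32) - background) ** 2, axis=-1)
    return eta_max * np.exp(-dist2 / (2.0 * sigma ** 2))

def update_background(frame, background, eta):
    # Per-pixel running update: B <- (1 - eta) * B + eta * I
    return (1.0 - eta[..., None]) * background + eta[..., None] * frame.astype(np.float32)

def abrupt_change(frame, background, corr_thr=0.5):
    # Coarse global test for sudden scenario changes (e.g. lights switched off):
    # if the intensity histograms of frame and background decorrelate,
    # gradual adaptation is abandoned and the model is re-initialized.
    hf, _ = np.histogram(frame.mean(axis=-1), bins=64, range=(0, 255), density=True)
    hb, _ = np.histogram(background.mean(axis=-1), bins=64, range=(0, 255), density=True)
    return np.corrcoef(hf, hb)[0, 1] < corr_thr

# Usage on a stream of H x W x 3 uint8 frames:
# background = frames[0].astype(np.float32)
# for frame in frames[1:]:
#     if abrupt_change(frame, background):
#         background = frame.astype(np.float32)        # re-initialize on sudden change
#     else:
#         eta = rbf_learning_rate(frame, background)   # per-pixel adaptive rate
#         background = update_background(frame, background, eta)
```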




References

1. Bouwmans T, Maddalena L, Petrosino A (2017) Scene background initialization: a taxonomy. Pattern Recogn Lett 96(1):1–9
2. Bouwmans T, Silva C, Marghes C, Zitouni MS (2018) On the role and the importance of features for background modeling and foreground detection. Comput Sci Rev 28(1):26–91
3. Bouwmans T (2014) Traditional and recent approaches in background modeling for foreground detection: an overview. Comput Sci Rev 11–12(1):31–66
4. Sobral A, Vacavant A (2014) A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos. Comput Vis Image Underst 122:4–21
5. Xu Y, Dong J, Zhang B, Xu D (2016) Background modeling methods in video analysis: a review and comparative evaluation. CAAI Trans Intell Technol 1(1):3–60
6. Allili MS, Bouguila N, Ziou D (2007) A robust video foreground segmentation by using generalized Gaussian mixture modeling. In: Fourth Canadian conference on computer and robot vision, IEEE
7. Haines TSF, Xiang T (2014) Background subtraction with Dirichlet process mixture models. IEEE Trans Pattern Anal Mach Intell 36(4):670–683
8. Chen Y, Wang J, Lu H (2015) Learning sharable models for robust background subtraction. In: International conference on multimedia, IEEE
9. Zhang Y, Zhao C, He ACJ (2016) Vehicles detection in complex urban traffic scenes using Gaussian mixture model with confidence measurement. IET Intell Transport Syst 10(6):445–452
10. Chen M, Wei X, Yang Q, Li Q, Wang G, Yang MH (2017) Spatiotemporal GMM for background subtraction with superpixel hierarchy. IEEE Trans Pattern Anal Mach Intell 99(PP):1–1
11. Kumar Sureshkumar DC (2013) Background subtraction based on threshold detection using modified k-means algorithm. In: International conference on pattern recognition, informatics and mobile engineering, IEEE
12. Xiuman D, Guoxia S, Tao Y (2012) Moving target detection based on genetic k-means algorithm. In: International conference on communication technology, IEEE
13. Soeleman MA, Hariadi M, Purnomo MH (2013) Adaptive threshold for background subtraction in moving object detection using fuzzy c-means clustering. In: Conference on TENCON, IEEE
14. Wu M, Peng X (2010) Spatio-temporal context for codebook-based dynamic background subtraction. AEU-Int J Electron Commun 64(8):739–747
15. Guo J-M, Hsia C-H, Liu Y-F, Shih M-H, Chang C-H, Wu J-Y (2013) Fast background subtraction based on a multilayer codebook model for moving object detection. IEEE Trans Circ Syst Video Technol 23(10):1809–1821
16. Bouwmans T (2012) Background subtraction for visual surveillance: a fuzzy approach, vol 5. Taylor and Francis Group
17. Sivabalakrishnan M, Manjula D (2012) Performance analysis of fuzzy logic-based background subtraction in dynamic environments. Imaging Sci J 60(1):39–46
18. Calvo-Gallego E, Sánchez-Solano S, Jiménez PB (2015) Hardware implementation of a background subtraction algorithm in FPGA-based platforms. In: International conference on industrial technology, IEEE
19. Mohamad A, Osman M (2013) Adaptive median filter background subtraction technique using fuzzy logic. In: International conference on computing, electrical and electronic engineering, IEEE
20. Zeng Z, Jia J, Yu D, Chen Y, Zhu Z (2017) Pixel modeling using histograms based on fuzzy partitions for dynamic background subtraction. IEEE Trans Fuzzy Syst 25(3):584–593
21. Culibrk D, Marques O, Socek D, Kalva H, Furht B (2007) Neural network approach to background modeling for video object segmentation. IEEE Trans Neural Netw 18(6):1614–1627
22. Goyette N, Jodoin PM, Porikli F, Konrad J, Ishwar P (2012) Changedetection.net: a new change detection benchmark dataset. In: IEEE computer society conference on computer vision and pattern recognition workshops, IEEE
23. Babaee M, Dinh D, Rigoll G (2017) A deep convolutional neural network for background subtraction. Cornell University Library
24. Xu P, Ye M, Li X, Liu Q, Yang Y, Ding J (2014) Dynamic background learning through deep auto-encoder networks. In: ACM international conference on multimedia
25. De Gregorio M, Giordano M (2017) Background estimation by weightless neural networks. Pattern Recogn Lett 96(1):55–65
26. Wang Y, Qi Y (2013) Memory-based cognitive modeling for robust object extraction and tracking. Appl Intell 39(3):614–629
27. Maddalena L, Petrosino A (2008) A self-organizing approach to background subtraction for visual surveillance applications. IEEE Trans Image Process 17(7):1168–1177
28. Chacon-Murguia MI, Ramirez-Alonso G (2015) Fuzzy-neural self-adapting background modeling with automatic motion analysis for dynamic object detection. Appl Soft Comput 36(1):570–577
29. Ramirez-Quintana JA, Chacon-Murguia MI (2015) An adaptive unsupervised neural network based on perceptual mechanism for dynamic object detection in videos with real scenarios. Neural Process Lett 42(3):665–689
30. Ramirez-Quintana JA, Chacon-Murguia MI (2015) Self-adaptive SOM-CNN neural system for dynamic object detection in normal and complex scenarios. Pattern Recogn 48(4):1137–1149
31. Ramirez-Alonso G, Chacon-Murguia MI (2016) Auto-adaptive parallel SOM architecture with a modular analysis for dynamic object segmentation in videos. Neurocomputing 175(B):990–1000
32. Ramirez-Alonso G, Ramirez-Quintana JA, Chacon-Murguia MI (2017) Temporal weighted learning model for background estimation with an automatic re-initialization stage and adaptive parameters update. Pattern Recogn Lett 96(1):34–44
33. Nohuddin PNE, Coenen F, Christley R, Setzkorn C, Patel Y, Williams S (2012) Finding "interesting" trends in social networks using frequent pattern mining and self organizing maps. Knowl-Based Syst 29(1):104–113
34. Abei G, Selamat A, Fujita H (2015) An empirical study based on semi-supervised hybrid self-organizing map for software fault prediction. Knowl-Based Syst 74(1):28–39
35. St-Charles PL, Bilodeau GA, Bergevin R (2016) Universal background subtraction using word consensus models. IEEE Trans Image Process 25(10):4768–4781
36. Toyama K, Krumm J, Brumitt B, Meyers B (1999) Wallflower: principles and practice of background maintenance. In: International conference on computer vision, IEEE
37. Li L, Huang W, Gu IY-H, Tian Q (2004) Statistical modeling of complex backgrounds for foreground object detection. IEEE Trans Image Process 13(11):1459–1472
38. Vacavant A, Chateau T, Wilhelm A, Lequièvre L (2012) A benchmark dataset for outdoor foreground/background extraction. In: Asian conference on computer vision, IEEE
39. Maddalena L, Petrosino A (2015) Towards benchmarking scene background initialization. In: International conference on image analysis and processing, Springer
40. Miikkulainen R, Bednar JA, Choe Y, Sirosh J (2005) Computational maps in the visual cortex. Springer Science+Business Media, New York
41. Hanbury A, Serra J (2003) A 3D-polar coordinate colour representation. Technical report, Pattern Recognition and Image Processing Group, Vienna University of Technology, Vienna
42. Ramirez-Quintana JA, Chacon-Murguia MI (2013) Self-organizing retinotopic maps applied to background modeling for dynamic object segmentation in video sequences. In: International joint conference on neural networks
43. Bors AG (2001) Introduction of the radial basis function (RBF) networks. Online Symposium for Electronics Engineers
44. Bezdek JC, Ehrlich R, Full W (1984) FCM: the fuzzy c-means clustering algorithm. Comput Geosci 10(2–3):191–203
45. Zhao G, Zhang C, Zheng L (2017) Intrusion detection using deep belief network and probabilistic neural network. In: International conference on computational science and engineering and international conference on embedded and ubiquitous computing
46. Chacon-Murguia MI, Ramirez-Quintana J, Urias-Zavala D (2015) Segmentation of video background regions based on a DTCNN-clustering approach. Signal Image Video Process 9(1):135–144
47. Hussain CA, Rao V, Praveen T (2013) Color histogram based image retrieval. Int J Adv Eng Technol IV/III:63–66
48. Benesty J, Chen J, Huang Y, Cohen I (2009) Pearson correlation coefficient. In: Noise reduction in speech processing. Springer, Berlin
49. Cheng F-C, Huang S-C, Ruan S-J (2011) Illumination-sensitive background modeling approach for accurate moving object detection. IEEE Trans Broadcast 57(4):794–801
50. Kaushal M, Khehra BS (2014) BBBCO and fuzzy entropy based modified background subtraction algorithm for object detection in videos. Appl Intell 41(1):117–127
51. St-Charles P-L, Bilodeau G-A, Bergevin R (2015) SuBSENSE: a universal change detection method with local adaptive sensitivity. IEEE Trans Image Process 24(1):359–373
52. St-Charles P-L, Bilodeau G-A (2014) Improving background subtraction using local binary similarity patterns. In: Applications of computer vision (WACV), IEEE
53. Wang Y, Luo Z, Jodoin P-M (2017) Interactive deep learning method for segmenting moving objects. Pattern Recogn Lett 96(1):66–75
54. Laugraud B, Piérard S, Droogenbroeck MV (2016) LaBGen-P: a pixel-level stationary background generation method based on LaBGen. In: International conference on pattern recognition, IEEE
55. Agarwala A, Dontcheva M, Agrawala M, Drucker S, Colburn A, Curless B, Salesin D, Cohen M (2004) Interactive digital photomontage. ACM Trans Graph 23(3):294–302
56. Maddalena L, Petrosino A (2016) Extracting a background image by a multi-modal scene background model. In: International conference on pattern recognition, IEEE
57. Javed S, Jung SK, Mahmood A, Bouwmans T (2016) Motion-aware graph regularized RPCA for background modeling of complex scene. In: International conference on pattern recognition, IEEE
58. Piccardi M (2004) Background subtraction techniques: a review. In: International conference on systems, man and cybernetics, IEEE
59. Minematsu T, Shimada A, Taniguchi R-I (2016) Background initialization based on bidirectional analysis and consensus voting. In: International conference on pattern recognition, IEEE


Acknowledgments

This research was funded by TecNM under grant 6418.18-P.

Author information

Corresponding author

Correspondence to Juan A. Ramirez-Quintana.


About this article


Cite this article

Ramirez-Quintana, J.A., Chacon-Murguia, M.I. & Ramirez-Alonso, G.M. Adaptive background modeling of complex scenarios based on pixel level learning modeled with a retinotopic self-organizing map and radial basis mapping. Appl Intell 48, 4976–4997 (2018). https://doi.org/10.1007/s10489-018-1256-5

