
Dilated CNN Based Human Verifier for Intrusion Detection

  • Conference paper
  • First Online:
Frontiers of Computer Vision (IW-FCV 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1212)


Abstract

This paper proposes an intrusion detection algorithm for intelligent surveillance systems. The algorithm detects intrusion threats with a dual-stage computer vision pipeline. In the first stage, the input video sequence passes through a probabilistic change detector based on Gaussian Mixture Models to segment intruders from the background. The extracted foreground region is then passed to the second stage, which verifies whether it is human; this stage is a shallow convolutional neural network (CNN) employing dilated convolution. The system sends an alert when an intrusion is detected. The algorithm is validated against a top-ranked change detection algorithm and outperforms it on the i-LIDS sterile zone monitoring dataset.
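
A minimal sketch of the dual-stage pipeline described above, assuming OpenCV's MOG2 background subtractor stands in for the GMM-based change detector and an illustrative shallow dilated-convolution CNN is built in Keras. The layer widths, 64x64 crop size, blob-area filter, and 0.5 decision threshold are assumptions for illustration, not the paper's reported configuration.

import cv2
import numpy as np
from tensorflow.keras import layers, models

# Stage 2: shallow CNN human verifier using dilated convolutions
# (hypothetical architecture; would be trained offline on human / non-human crops).
def build_verifier():
    return models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 3, dilation_rate=2, padding='same', activation='relu'),
        layers.Conv2D(32, 3, dilation_rate=2, padding='same', activation='relu'),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation='sigmoid'),   # P(foreground blob is human)
    ])

verifier = build_verifier()
# Stage 1: probabilistic GMM change detector (OpenCV MOG2 used as a stand-in).
subtractor = cv2.createBackgroundSubtractorMOG2()

cap = cv2.VideoCapture('sterile_zone.avi')       # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # foreground mask from the GMM
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                          # ignore tiny blobs (assumed threshold)
            continue
        crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        crop = cv2.resize(crop, (64, 64)).astype(np.float32) / 255.0
        if verifier.predict(crop[None, ..., None], verbose=0)[0, 0] > 0.5:
            print('Intrusion alert: human detected at', (x, y, w, h))
cap.release()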



Author information

Corresponding author

Correspondence to Ajmal Shahbaz.


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Shahbaz, A., Jo, K.H. (2020). Dilated CNN Based Human Verifier for Intrusion Detection. In: Ohyama, W., Jung, S. (eds) Frontiers of Computer Vision. IW-FCV 2020. Communications in Computer and Information Science, vol 1212. Springer, Singapore. https://doi.org/10.1007/978-981-15-4818-5_8


  • DOI: https://doi.org/10.1007/978-981-15-4818-5_8

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-4817-8

  • Online ISBN: 978-981-15-4818-5

  • eBook Packages: Computer Science, Computer Science (R0)
