
ID-Net: an improved mask R-CNN model for intrusion detection under power grid surveillance

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Intrusion detection is a crucial task in power grid surveillance systems, providing early warning for power grid security. Construction machinery and engineering vehicles, the most common intrusion objects, are the major concern in preventing external damage during power grid maintenance. In this paper, considering the diversity of intrusion object scales and the complexity of application scenarios under power grid surveillance, we compile a dataset of 8177 images captured by 653 different power grid surveillance cameras. Based on this dataset, we propose an improved context-aware mask region-based convolutional neural network (Mask R-CNN) model, named ID-Net, for intrusion object detection. A modulated deformable convolution operation is integrated into the backbone network to learn feature representations that are robust to the geometric variations of engineering vehicles. To exploit the correlation between objects and their context, a self-attention-based module is leveraged for long-range context relation modeling. For small object detection, a feature integration module is applied for multi-scale feature fusion under a pyramid hierarchy. A cascaded coarse-to-fine region proposal network is then incorporated to progressively refine bounding box regression. Experimental results demonstrate that our model achieves competitive performance in comparison with state-of-the-art object detection methods.
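To make the long-range context-modeling idea concrete, below is a minimal PyTorch sketch of an embedded-Gaussian non-local (self-attention) block of the kind the abstract's self-attention-based module builds on: every spatial position attends to every other position, so object-context relations that span the whole image can be captured in a single layer. The class name, channel sizes, reduction ratio, and the choice of applying it to a 256-channel FPN-style feature map are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch: an embedded-Gaussian non-local (self-attention) block for
# long-range context modeling over a 2D feature map. All hyper-parameters
# below are illustrative assumptions, not ID-Net's actual configuration.
import torch
import torch.nn as nn


class NonLocalBlock(nn.Module):
    def __init__(self, in_channels: int, reduction: int = 2):
        super().__init__()
        inter = max(in_channels // reduction, 1)
        self.theta = nn.Conv2d(in_channels, inter, kernel_size=1)  # query projection
        self.phi = nn.Conv2d(in_channels, inter, kernel_size=1)    # key projection
        self.g = nn.Conv2d(in_channels, inter, kernel_size=1)      # value projection
        self.out = nn.Conv2d(inter, in_channels, kernel_size=1)    # restore channel count
        nn.init.zeros_(self.out.weight)  # start as an identity mapping (pure residual)
        nn.init.zeros_(self.out.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, _, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (N, HW, C')
        k = self.phi(x).flatten(2)                     # (N, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (N, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # affinity of each position to every other
        y = (attn @ v).transpose(1, 2).reshape(n, -1, h, w)
        return x + self.out(y)                         # residual connection keeps backbone features


if __name__ == "__main__":
    feat = torch.randn(2, 256, 32, 32)        # e.g. one FPN level (assumed shape)
    print(NonLocalBlock(256)(feat).shape)     # -> torch.Size([2, 256, 32, 32])
```

In a Mask R-CNN-style pipeline, such a block would typically be inserted after one or more backbone or FPN stages, leaving the rest of the detector unchanged.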



Acknowledgements

This work was supported in part by the National Key R&D Program of China under Grants 2018YFB1003800 and 2018YFB1003805, the National Natural Science Foundation of China under Grants 61572156 and 61832004, the Shenzhen Science and Technology Program under Grant JCYJ20170413105929681, and projects of State Grid Shaanxi Electrical Power Company under Grants FWZ-ZB-GWSNDL20-02-86 and 5226KY19004Y.

Author information

Corresponding authors

Correspondence to Yuzhu Ji or Biao Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Gao, F., Ji, S., Guo, J. et al. ID-Net: an improved mask R-CNN model for intrusion detection under power grid surveillance. Neural Comput & Applic 33, 9241–9257 (2021). https://doi.org/10.1007/s00521-021-05688-2
