Efficient Multi-Receptive Pooling for Object Detection on Drone

  • Conference paper
  • In: Frontiers of Computer Vision (IW-FCV 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1857)

Abstract

Object detection, which determines the location and class of objects in an image, is one of the most fundamental research problems in computer vision and has been studied continuously over the past few years. With advances in hardware such as GPU computing power and cameras, object detection technology has been improving steadily. However, it is difficult to use GPUs on low-cost devices such as drones, so efficient deep learning models that can run on such devices are needed. In this paper, we propose a deep learning model that enables real-time object detection on a low-cost device. We reduce the amount of computation and improve speed by modifying the CSP Bottleneck and SPPF modules in the YOLOv5 backbone. The model is trained on the MS COCO and VisDrone datasets and achieves 0.364 mAP and 0.19 mAP, respectively, which is about 0.07 and 0.04 higher than RefineDetLite and RefineDet. It runs at 23.010 frames per second on a CPU, which is sufficient for real-time object detection.
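
The abstract refers to modifying the CSP Bottleneck and SPPF modules of the YOLOv5 backbone. As a rough illustration of what an SPPF-style multi-receptive pooling block looks like, the PyTorch sketch below applies one small max-pooling kernel repeatedly, so that three cascaded 5x5 pools cover receptive fields of 5, 9, and 13 pixels, and then fuses the results with 1x1 convolutions. This is only a sketch inferred from the abstract; the class name MultiReceptivePooling, the channel widths, the kernel size, and the activation are assumptions, not the authors' exact module.

import torch
import torch.nn as nn

class MultiReceptivePooling(nn.Module):
    """SPPF-style pooling block (illustrative sketch, not the paper's exact module)."""

    def __init__(self, in_channels: int, out_channels: int, k: int = 5):
        super().__init__()
        hidden = in_channels // 2
        # 1x1 convolution to halve the channel width before pooling
        self.reduce = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.SiLU(inplace=True),
        )
        # One small pooling kernel reused three times: cascaded k x k max-pools
        # approximate receptive fields of k, 2k-1, and 3k-2.
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        # 1x1 convolution to fuse the original and pooled features
        self.fuse = nn.Sequential(
            nn.Conv2d(hidden * 4, out_channels, 1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.SiLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.reduce(x)
        p1 = self.pool(x)
        p2 = self.pool(p1)
        p3 = self.pool(p2)
        return self.fuse(torch.cat([x, p1, p2, p3], dim=1))

if __name__ == "__main__":
    feat = torch.randn(1, 256, 20, 20)  # dummy backbone feature map
    print(MultiReceptivePooling(256, 256)(feat).shape)  # torch.Size([1, 256, 20, 20])

Reusing one cheap pooling operator instead of several parallel large-kernel pools is what makes SPPF-style blocks attractive on CPU-only devices such as drones.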

References

  1. An, J., Putro, M.D., Jo, K.H.: Efficient residual bottleneck for object detection on CPU. In: 2022 International Workshop on Intelligent Systems (IWIS), pp. 1–4. IEEE (2022)

  2. Bera, B., Das, A.K., Garg, S., Piran, M.J., Hossain, M.S.: Access control protocol for battlefield surveillance in drone-assisted IoT environment. IEEE Internet Things J. 9(4), 2708–2721 (2021)

  3. Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M.: YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934 (2020)

  4. Cai, Z., Vasconcelos, N.: Cascade R-CNN: delving into high quality object detection. CoRR abs/1712.00726 (2017). arXiv:1712.00726

  5. Cai, Z., Vasconcelos, N.: Cascade R-CNN: high quality object detection and instance segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 43(5), 1483–1498 (2021). https://doi.org/10.1109/TPAMI.2019.2956516

  6. Chakraborty, S., Aich, S., Kumar, A., Sarkar, S., Sim, J.S., Kim, H.C.: Detection of cancerous tissue in histopathological images using dual-channel residual convolutional neural networks (DCRCNN). In: 2020 22nd International Conference on Advanced Communication Technology (ICACT), pp. 197–202 (2020). https://doi.org/10.23919/ICACT48636.2020.9061289

  7. Chen, C., Liu, M., Meng, X., Xiao, W., Ju, Q.: RefineDetLite: a lightweight one-stage object detection framework for CPU-only devices. CoRR abs/1911.08855 (2019). arXiv:1911.08855

  8. Girshick, R.: Fast R-CNN. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1440–1448. IEEE (2015)

  9. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587. IEEE (2014)

  10. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2015)

  11. Ikshwaku, S., Srinivasan, A., Varghese, A., Gubbi, J.: Railway corridor monitoring using deep drone vision. In: Verma, N., Ghosh, A. (eds.) Computational Intelligence: Theories, Applications and Future Directions - Volume II. Advances in Intelligent Systems and Computing, vol. 799, pp. 361–372. Springer, Singapore (2019). https://doi.org/10.1007/978-981-13-1135-2_28

  12. Jocher, G., Stoken, A., Borovec, J.: ultralytics/yolov5: v3.0. https://doi.org/10.5281/zenodo.3983579

  13. Kim, M., Jeong, J., Kim, S.: ECAP-YOLO: efficient channel attention pyramid yolo for small object detection in aerial image. Remote Sens. 13(23), 4851 (2021)

  14. Law, H., Deng, J.: CornerNet: detecting objects as paired keypoints. CoRR abs/1808.01244 (2018). arXiv:1808.01244

  15. Li, Y., Li, J., Lin, W., Li, J.: Tiny-DSOD: lightweight object detection for resource-restricted usages. CoRR abs/1807.11013 (2018). arXiv:1807.11013

  16. Li, Z., Peng, C., Yu, G., Zhang, X., Deng, Y., Sun, J.: Light-head R-CNN: in defense of two-stage object detector. CoRR abs/1711.07264 (2017). arXiv:1711.07264

  17. Li, Z., Peng, C., Yu, G., Zhang, X., Deng, Y., Sun, J.: DetNet: a backbone network for object detection. CoRR abs/1804.06215 (2018). arXiv:1804.06215

  18. Lin, T.Y., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 936–944. IEEE (2017)

  19. Lin, T., Dollár, P., Girshick, R.B., He, K., Hariharan, B., Belongie, S.J.: Feature pyramid networks for object detection. CoRR abs/1612.03144 (2016). arXiv:1612.03144

  20. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. IEEE Trans. Pattern Anal. Mach. Intell. 42(2), 318–327 (2018)

  21. Lin, T., Goyal, P., Girshick, R.B., He, K., Dollár, P.: Focal loss for dense object detection. CoRR abs/1708.02002 (2017). arXiv:1708.02002

  22. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48

  23. Liu, S., Qi, L., Qin, H., Shi, J., Jia, J.: Path aggregation network for instance segmentation. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8759–8768. IEEE (2018)

  24. Liu, W., et al.: SSD: single shot multibox detector. CoRR abs/1512.02325 (2015). arXiv:1512.02325

  25. Murthy, C.B., Hashmi, M.F.: Real time pedestrian detection using robust enhanced YOLOv3. In: 2020 21st International Arab Conference on Information Technology (ACIT), pp. 1–5. IEEE (2020)

  26. Pasqualino, G., Furnari, A., Signorello, G., Farinella, G.M.: An unsupervised domain adaptation scheme for single-stage artwork recognition in cultural sites. Image Vis. Comput. 107, 104098 (2021). https://doi.org/10.1016/j.imavis.2021.104098

  27. Putro, M.D., Nguyen, D.L., Priadana, A., Jo, K.H.: Fast person detector with efficient multi-level contextual block for supporting assistive robot. In: 2022 IEEE 5th International Conference on Industrial Cyber-Physical Systems (ICPS), pp. 1–6. IEEE (2022)

  28. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788. IEEE (2016)

  29. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. CoRR abs/1804.02767 (2018). arXiv:1804.02767

  30. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2016)

  31. Sambolek, S., Ivasic-Kos, M.: Automatic person detection in search and rescue operations using deep CNN detectors. IEEE Access 9, 37905–37922 (2021)

  32. Sandler, M., Howard, A.G., Zhu, M., Zhmoginov, A., Chen, L.: Inverted residuals and linear bottlenecks: mobile networks for classification, detection and segmentation. CoRR abs/1801.04381 (2018). arXiv:1801.04381

  33. Sibanyoni, S.V., Ramotsoela, D.T., Silva, B.J., Hancke, G.P.: A 2-D acoustic source localization system for drones in search and rescue missions. IEEE Sens. J. 19(1), 332–341 (2018)

  34. Sun, W., Dai, L., Zhang, X., Chang, P., He, X.: RSOD: real-time small object detection algorithm in UAV-based traffic monitoring. Appl. Intell. 52(8), 8448–8463 (2022). https://doi.org/10.1007/s10489-021-02893-3

  35. Wang, C.Y., Liao, H.Y.M., Wu, Y.H., Chen, P.Y., Hsieh, J.W., Yeh, I.H.: CSPNet: a new backbone that can enhance learning capability of CNN. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1571–1580. IEEE (2020)

  36. Wang, C., Liao, H.M., Yeh, I., Wu, Y., Chen, P., Hsieh, J.: CSPNet: a new backbone that can enhance learning capability of CNN. CoRR abs/1911.11929 (2019). arXiv:1911.11929

  37. Wang, R.J., Li, X., Ao, S., Ling, C.X.: Pelee: a real-time object detection system on mobile devices. CoRR abs/1804.06882 (2018). arXiv:1804.06882

  38. Wang, X., Li, W., Guo, W., Cao, K.: SPB-YOLO: an efficient real-time detector for unmanned aerial vehicle images. In: 2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), pp. 099–104. IEEE (2021)

  39. Zhang, J., Wang, P., Zhao, Z., Su, F.: Pruned-YOLO: learning efficient object detector using model pruning. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds.) ICANN 2021. LNCS, vol. 12894, pp. 34–45. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-86380-7_4

  40. Zhang, S., Wen, L., Bian, X., Lei, Z., Li, S.Z.: Single-shot refinement neural network for object detection. CoRR abs/1711.06897 (2017). arXiv:1711.06897

  41. Zhu, P., et al.: Detection and tracking meet drones challenge. IEEE Trans. Pattern Anal. Mach. Intell. 44(11), 7380–7399 (2021)

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the government (MSIT) (No. 2020R1A2C2008972).

Author information

Corresponding author

Correspondence to Kang-Hyun Jo.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

An, J., Putro, M.D., Priadana, A., Jo, K.H. (2023). Efficient Multi-Receptive Pooling for Object Detection on Drone. In: Na, I., Irie, G. (eds.) Frontiers of Computer Vision. IW-FCV 2023. Communications in Computer and Information Science, vol 1857. Springer, Singapore. https://doi.org/10.1007/978-981-99-4914-4_2

  • DOI: https://doi.org/10.1007/978-981-99-4914-4_2

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-4913-7

  • Online ISBN: 978-981-99-4914-4

  • eBook Packages: Computer Science, Computer Science (R0)
