
Bounding convolutional network for refining object locations

  • Original Article
  • Neural Computing and Applications

Abstract

Object detection, an important task in computer vision, has seen conspicuous improvement. Among object detection methods, the anchor-free one-stage detector offers a simple, end-to-end deep learning pipeline. Many one-stage detectors locate bounding boxes by regression, and the regression loss is the largest of all the error terms; accurately locating the bounding box is therefore one of the keys to improving a detector's average precision (AP). In this paper, we propose a simple and precise locator named the bounding convolutional network (BoundConvNet), which extracts “bounding features” from heatmaps to refine object locations, and apply a category-aware collaborative intersection-over-union (Co-IoU) loss function to optimize bounding box regression, addressing the problem of overlapping center points of different classes. BoundConvNet is a head network for bounding box regression that stacks several depthwise separable dilated convolutional layers to decouple the classification task from the regression task. Extensive experiments demonstrate that BoundConvNet improves the AP of the one-stage detector CenterNet and helps CenterNet mark object bounding boxes more accurately. For small object detection, the AP of CenterNet is improved by a relative 13.8% on the MS COCO dataset with ResNet-18 as the backbone.
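
To make the architecture concrete, here is a minimal PyTorch sketch of the kind of regression head the abstract describes: a stack of depthwise separable dilated convolutions producing a CenterNet-style width/height map, together with a plain IoU loss as a stand-in for Co-IoU. The kernel sizes, dilation rates, channel widths, and the category-aware collaborative term of Co-IoU are not given in this excerpt, so those choices are assumptions.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableDilatedConv(nn.Module):
    """Depthwise separable convolution with dilation (building block).

    Hypothetical configuration: the paper stacks such layers in
    BoundConvNet, but the exact hyperparameters are not in this excerpt.
    """

    def __init__(self, channels: int, dilation: int):
        super().__init__()
        # Depthwise 3x3 conv: one filter per channel (groups=channels);
        # padding=dilation keeps the spatial size unchanged.
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=channels)
        # Pointwise 1x1 conv mixes information across channels.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pointwise(self.depthwise(x)))


class BoundingHead(nn.Module):
    """Sketch of a regression head decoupled from the classification branch."""

    def __init__(self, in_channels: int = 64, num_layers: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(
            *[DepthwiseSeparableDilatedConv(in_channels, dilation=2 ** i)
              for i in range(num_layers)])
        # Per-pixel box width/height map, as in CenterNet's wh head.
        self.wh = nn.Conv2d(in_channels, 2, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.wh(self.blocks(feat))


def iou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Plain IoU loss over (x1, y1, x2, y2) boxes.

    A stand-in only: the paper's Co-IoU adds a category-aware
    collaborative term for overlapping center points of different
    classes, which this excerpt does not specify.
    """
    lt = torch.max(pred[:, :2], target[:, :2])  # intersection top-left
    rb = torch.min(pred[:, 2:], target[:, 2:])  # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    return (1.0 - inter / union.clamp(min=1e-6)).mean()


# Usage on a CenterNet-style stride-4 feature map (64 channels).
feat = torch.randn(1, 64, 128, 128)
print(BoundingHead()(feat).shape)  # torch.Size([1, 2, 128, 128])
```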


Data availability

Some or all of the data, models, or code generated or used during the study are available from the first author and the corresponding author on reasonable request.


Acknowledgements

This work was supported by the Science and Technology Development Fund (FDCT) of Macau under Grant No. 0071/2022/A.

Author information

Corresponding author

Correspondence to Wenmin Wang.

Ethics declarations

Conflict of interest

The authors declare that they do not have any commercial or associative interest that represents a conflict of interest in connection with the submitted work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhang, S., Wang, W., Li, H. et al. Bounding convolutional network for refining object locations. Neural Comput & Applic 35, 19297–19313 (2023). https://doi.org/10.1007/s00521-023-08782-9
