
Object detection network pruning with multi-task information fusion


Abstract

In this work, we propose a novel channel pruning method for object detection. Most existing network pruning methods ignore the multi-task nature of object detection, i.e., object classification and localization. Based on this observation, we develop a Multi-task Information Fusion method for Channel Pruning (MIFCP). We design an attention module built on group convolutions to help preserve the multi-task information extracted by the network backbone and the feature fusion layers. Meanwhile, a multi-task aware loss is proposed to evaluate each channel's contribution to the final detection, according to which the top-k most representative channels are preserved and the rest are pruned. Extensive experiments demonstrate the superiority of our method over state-of-the-art methods. On the VOC2007 dataset, the YOLOv3 model pruned by our method at a pruning rate of 0.7 achieves 58.6 FPS with only a 3% performance drop compared to the original model.
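
The abstract names two concrete ingredients of MIFCP: an attention module built from group convolutions, and a top-k channel-selection rule driven by a multi-task aware importance score. The PyTorch sketch below only illustrates those two ideas under stated assumptions; the names GroupConvAttention and topk_channel_mask, the group count, and the stand-in importance score are illustrative choices, not the authors' implementation (the full method is behind the access wall).

```python
# Minimal sketch of (1) channel attention via group convolutions and
# (2) keeping the top-k channels under a per-channel importance score.
# All names and the score definition are assumptions for illustration.
import torch
import torch.nn as nn


class GroupConvAttention(nn.Module):
    """Channel attention computed with 1x1 group convolutions (illustrative)."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        # Group convolutions keep per-group channel statistics separate,
        # one plausible way to preserve task-specific information.
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1, groups=groups),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average pooling -> per-channel attention weights in (0, 1).
        w = self.fc(x.mean(dim=(2, 3), keepdim=True))
        return x * w


def topk_channel_mask(channel_scores: torch.Tensor, prune_rate: float) -> torch.Tensor:
    """Keep the top-k channels by importance; mask out the rest.

    In the paper the scores would come from a multi-task aware loss that
    reflects both classification and localization; here they are just a
    1-D tensor of non-negative values.
    """
    num_channels = channel_scores.numel()
    k = max(1, int(round(num_channels * (1.0 - prune_rate))))
    keep = torch.topk(channel_scores, k).indices
    mask = torch.zeros(num_channels, dtype=torch.bool)
    mask[keep] = True
    return mask


if __name__ == "__main__":
    feat = torch.randn(1, 64, 52, 52)           # feature map from a detector neck
    attn = GroupConvAttention(channels=64, groups=4)
    out = attn(feat)                            # same shape, channel-reweighted

    scores = out.abs().mean(dim=(0, 2, 3))      # stand-in per-channel importance
    mask = topk_channel_mask(scores, prune_rate=0.7)
    print(out.shape, int(mask.sum()), "channels kept out of", scores.numel())
```

With prune_rate=0.7, roughly 30% of the channels survive, matching the pruning-rate setting quoted for the 58.6 FPS result.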



Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61972064), the Liaoning Revitalization Talents Program (No. XLYC1806006), and the Science and Technology Innovation Foundation of Dalian (No. 2020JJ26GX036).

Author information

Corresponding author

Correspondence to Lin Feng.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Special Issue on Synthetic Media on the Web. Guest Editors: Huimin Lu, Xing Xu, Jože Guna, and Gautam Srivastava.

About this article


Cite this article

Li, S., Xue, L., Feng, L. et al. Object detection network pruning with multi-task information fusion. World Wide Web 25, 1667–1683 (2022). https://doi.org/10.1007/s11280-021-00991-3

