
ST-YOLOX: a lightweight and accurate object detection network based on Swin Transformer

The Journal of Supercomputing

Abstract

With the rapid development of artificial intelligence and Internet of Things (IoT) technology, an increasing number of edge devices have entered people’s daily lives. However, due to the limited performance of edge devices, complex models can degrade the response speed and efficiency of the whole system. Existing research still cannot simultaneously satisfy the accuracy and response-speed demands of edge devices. This paper proposes a lightweight and highly accurate object detection model based on the Transformer to address the limited computational capacity and storage space of edge devices. Specifically, the proposed model adopts the Swin Transformer for multi-scale feature extraction to achieve better global modeling capability. In addition, we propose a Neck module based on the path aggregation network (PAN), designed as a two-feature-pyramid structure that combines semantic and localization information, improving performance by exploiting low-level location features. A lightweight detection head is then developed using group convolution, fusing the two localization branches and removing the additional decoupling operation. Finally, we conduct comparative experiments on three datasets: the Retail-cabinet dataset, the Roadsign dataset, and the Pascal VOC dataset. Experimental results show that, compared with the baseline model, our model achieves an 11.8% improvement in mAP on the Retail-cabinet dataset while reducing Params and FLOPs by 23.19% and 71.50%, respectively. The proposed model effectively reduces computational complexity and improves detection performance, and thus has high practical value. The code is released at https://github.com/ydlam/ST-YOLOX.
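For illustration, the following is a minimal PyTorch sketch of a detection head in the spirit described above: a group-convolution stem feeding a classification branch and a single fused localization branch (box offsets plus objectness), with no extra decoupling stage. The module name, channel sizes, and group count here are our own assumptions for exposition, not the authors' released implementation; see the GitHub link above for the actual code.

import torch
import torch.nn as nn

class LightweightHead(nn.Module):
    """Hypothetical sketch of a group-convolution detection head."""

    def __init__(self, in_channels: int = 256, num_classes: int = 80, groups: int = 4):
        super().__init__()
        # Group convolution splits the channels into `groups` independent
        # paths, cutting the stem's parameters and FLOPs roughly by a
        # factor of `groups` compared with a dense 3x3 convolution.
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1, groups=groups, bias=False),
            nn.BatchNorm2d(in_channels),
            nn.SiLU(inplace=True),
        )
        # Classification branch: one score per class per location.
        self.cls_pred = nn.Conv2d(in_channels, num_classes, 1)
        # Fused localization branch: 4 box offsets + 1 objectness score
        # predicted together instead of by two separate decoupled branches.
        self.loc_pred = nn.Conv2d(in_channels, 4 + 1, 1)

    def forward(self, feat: torch.Tensor):
        x = self.stem(feat)
        return self.cls_pred(x), self.loc_pred(x)

if __name__ == "__main__":
    head = LightweightHead()
    cls_out, loc_out = head(torch.randn(1, 256, 20, 20))
    print(cls_out.shape, loc_out.shape)  # (1, 80, 20, 20), (1, 5, 20, 20)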


Data availability

The datasets can be downloaded from the following links: https://aistudio.baidu.com/aistudio/datasetdetail/91732; https://www.kaggle.com/andrewmvd/road-sign-detection; http://host.robots.ox.ac.uk/pascal/VOC/.


Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62266043 and U1803261, in part by the National Science and Technology Major Project under Grant 95-Y50G34-9001-22/23, and in part by the Autonomous Region Science and Technology Department International Cooperation Project under Grant 2020E01023.

Author information

Authors and Affiliations

Authors

Contributions

JH contributed to conceptualization and data analysis; GY contributed to verification and writing; HW reviewed and edited; WG reviewed and edited; YQ contributed to resources and supervision. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Yurong Qian.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Ethical approval

This study does not involve human or animal subjects.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Han, J., Yang, G., Wei, H. et al. ST-YOLOX: a lightweight and accurate object detection network based on Swin Transformer. J Supercomput 80, 8038–8059 (2024). https://doi.org/10.1007/s11227-023-05744-9

