
SFRT-DETR: A SAR ship detection algorithm based on feature selection and multi-scale feature focus

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

Ship images acquired by synthetic aperture radar (SAR) are affected by complex marine environments and widely varying ship sizes, which poses great challenges for lightweight, high-accuracy, real-time SAR ship detection. To address these issues, we propose SFRT-DETR, a SAR ship detection algorithm based on feature selection and multi-scale feature focus. First, a feature selection module is designed to screen SAR ship image features through an attention mechanism, filtering redundant background information and improving the detection speed of the model. Then, a Multi-scale Feature Focus (MFF) module is constructed, which uses parallel dilated convolutions to capture and focus ship features at different scales; this module effectively improves the model's ability to detect large, medium, and small ships. Finally, multi-path up-sampling and down-sampling modules are constructed, which strengthen meaningful multi-scale ship features. Experimental results on the High-Resolution SAR Images Dataset (HRSID) and the SAR Ship Detection Dataset (SSDD) show that, compared with the baseline RT-DETR model, Average Precision (AP) improves by 5.37% and 7.30%, respectively, while the detection speed reaches 77 frames per second (FPS), achieving a balance between high accuracy and real-time detection.
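As an illustration of the multi-scale feature focus idea described in the abstract, the following is a minimal PyTorch-style sketch, not the authors' released code: the module name, channel width, dilation rates, and fusion scheme are assumptions, and the block only shows the general pattern of parallel dilated convolutions whose outputs are fused back to the input width.

```python
# Hypothetical sketch of a multi-scale feature focus block (assumed design,
# not the paper's implementation): parallel dilated 3x3 convolutions with
# different dilation rates see different receptive fields, and a 1x1
# convolution fuses ("focuses") the multi-scale responses.
import torch
import torch.nn as nn


class MultiScaleFeatureFocus(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        # One branch per dilation rate; padding=d keeps the spatial size fixed.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.SiLU(),
            )
            for d in dilations
        ])
        # Fuse the concatenated branch outputs back to the input channel width.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        # Small dilations respond to small ships, larger dilations to large ones.
        multi_scale = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(multi_scale, dim=1)) + x  # residual connection


if __name__ == "__main__":
    feats = torch.randn(1, 256, 40, 40)                  # a mid-level feature map
    print(MultiScaleFeatureFocus(256)(feats).shape)      # torch.Size([1, 256, 40, 40])
```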



Data availability

No datasets were generated or analysed during the current study.


Author information

Authors and Affiliations

Authors

Contributions

Cao Jie: mainly contributed to the innovative proposal of the manuscript and to the design and analysis of the experiments. Han Penghui: mainly contributed to manuscript writing and to the analysis of the experimental results. Liang Haopeng: mainly contributed the drawings of each module diagram in the manuscript and the optimization of each module. Niu Yu: mainly contributed to the acquisition and collection of the datasets and to the final proofreading of the manuscript.

Corresponding author

Correspondence to Cao Jie.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jie, C., Penghui, H., Haopeng, L. et al. SFRT-DETR: A SAR ship detection algorithm based on feature selection and multi-scale feature focus. SIViP 19, 115 (2025). https://doi.org/10.1007/s11760-024-03707-y


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s11760-024-03707-y

Keywords
