ADS-B-Based Spatial-Temporal Multi-scale Object Detection Network for Airport Scenes

  • Conference paper
Image and Graphics (ICIG 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14358)


Abstract

Aircraft detection is important for intelligent airport applications. The task is challenging because aircraft often appear small in the image and their appearance varies dramatically with viewing angle. In this paper, we introduce the Automatic Dependent Surveillance-Broadcast (ADS-B) signal into the object detection framework. ADS-B is airport-specific data that provides aircraft location information in real time, and we use it as prior information to guide aircraft detection. First, from the spatial perspective, we construct an ADS-B-based saliency function and use it to apply attention to specific spatial regions during feature extraction. Because aircraft are likely to lie within the attended regions, detection accuracy is improved, especially for small aircraft. Second, from the temporal perspective, we predict the motion direction of moving aircraft from historical ADS-B data and use it to generate anchors that are updated in real time; shape and scale priors are also incorporated into the anchor generation process. The generated anchors fit aircraft shape well, even under drastic view-angle changes. Finally, experiments on the AGVS-T dataset verify the effectiveness of the proposed method.

Supported by the National Natural Science Foundation of China under grants U1733111 and U19A2052, and partly by the Project of Quzhou Municipal Government (2022D034).
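As a rough illustration of the two ideas summarised in the abstract, the sketch below is a minimal NumPy example, not code from the paper: the function names, the Gaussian form of the saliency map, and the anchor-stretching rule are assumptions made here purely for illustration. It shows how an ADS-B position projected onto a feature-map grid could drive a spatial saliency map that re-weights features, and how two consecutive ADS-B positions could yield a motion-aligned anchor.

```python
# Illustrative sketch only: a hypothetical ADS-B-guided saliency map and
# motion-aligned anchor, NOT the authors' implementation.
import numpy as np

def adsb_saliency_map(feat_h, feat_w, aircraft_xy, sigma=4.0):
    """Gaussian saliency centred on each aircraft position (already
    projected onto the feature-map grid); values lie in [0, 1]."""
    ys, xs = np.mgrid[0:feat_h, 0:feat_w]
    saliency = np.zeros((feat_h, feat_w), dtype=np.float32)
    for cx, cy in aircraft_xy:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
        saliency = np.maximum(saliency, g.astype(np.float32))
    return saliency

def apply_spatial_attention(features, saliency, alpha=1.0):
    """Re-weight a C x H x W feature map so regions near ADS-B-reported
    positions are emphasised: F' = F * (1 + alpha * S)."""
    return features * (1.0 + alpha * saliency[None, :, :])

def motion_aligned_anchor(prev_xy, curr_xy, base_w, base_h, stretch=1.5):
    """Stretch a base anchor along the dominant axis of the motion
    direction estimated from two consecutive ADS-B positions."""
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    heading = np.arctan2(dy, dx)  # radians, in image coordinates
    if abs(np.cos(heading)) >= abs(np.sin(heading)):
        w, h = base_w * stretch, base_h   # mostly horizontal motion
    else:
        w, h = base_w, base_h * stretch   # mostly vertical motion
    return (curr_xy[0], curr_xy[1], w, h)  # (cx, cy, w, h)

# Example: one aircraft reported near feature cell (20, 12), moving right.
feat = np.random.rand(256, 32, 32).astype(np.float32)
sal = adsb_saliency_map(32, 32, [(20, 12)])
feat_att = apply_spatial_attention(feat, sal)
anchor = motion_aligned_anchor((18, 12), (20, 12), base_w=16, base_h=16)
print(feat_att.shape, anchor)
```

In the network described by the abstract, such a saliency term would act during feature extraction and the motion-aligned anchors would augment the fixed anchor set; the exact formulation is not recoverable from the abstract alone.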



Author information

Corresponding author

Correspondence to Xiang Zhang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Jiang, L., Zhang, X., Liu, Y., Li, T. (2023). ADS-B-Based Spatial-Temporal Multi-scale Object Detection Network for Airport Scenes. In: Lu, H., et al. Image and Graphics. ICIG 2023. Lecture Notes in Computer Science, vol 14358. Springer, Cham. https://doi.org/10.1007/978-3-031-46314-3_27

  • DOI: https://doi.org/10.1007/978-3-031-46314-3_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46313-6

  • Online ISBN: 978-3-031-46314-3

  • eBook Packages: Computer Science, Computer Science (R0)
