End-to-end Neural Estimation of Spacecraft Pose with Intermediate Detection of Keypoints

  • Conference paper
Computer Vision – ECCV 2022 Workshops (ECCV 2022)

Abstract

State-of-the-art methods for estimating the pose of spacecraft in Earth-orbit images rely on a convolutional neural network either to directly regress the spacecraft’s 6D pose parameters, or to localize pre-defined keypoints that are then used to compute the pose with a Perspective-n-Point solver. We study an alternative solution in which a convolutional network predicts keypoint locations that a second network then uses to infer the spacecraft’s 6D pose. This formulation retains the accuracy advantages of keypoint-based methods while affording end-to-end training and faster processing. Our paper is the first to evaluate the applicability of such a method to the space domain. On the SPEED dataset, our approach achieves a mean rotation error of \(4.69^\circ \) and a mean translation error of \(1.59\%\) at a throughput of 31 fps. We show that computational complexity can be reduced at the cost of a minor loss in accuracy.
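The two-stage pipeline described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' architecture: the "networks" are stand-in linear maps, and the keypoint count (11) and feature dimension (128) are hypothetical. It only shows the data flow: image features → 2D keypoints → a learned pose regressor that replaces the Perspective-n-Point solver, so both stages are differentiable and trainable end to end.

```python
# Illustrative sketch (NOT the paper's code): stage 1 predicts 2D keypoints,
# stage 2 regresses a 6D pose (unit quaternion + translation) from them.
import numpy as np

rng = np.random.default_rng(0)
N_KP = 11        # hypothetical number of pre-defined keypoints
IMG_FEAT = 128   # hypothetical image-feature dimension

# Stage 1: a "keypoint network" maps image features to 2D keypoint locations.
W_kp = rng.standard_normal((IMG_FEAT, N_KP * 2)) * 0.01
def keypoint_net(feat):
    return (feat @ W_kp).reshape(N_KP, 2)   # (N_KP, 2) pixel coordinates

# Stage 2: instead of a PnP solver, a second "pose network" regresses
# the pose directly from the predicted keypoints.
W_pose = rng.standard_normal((N_KP * 2, 7)) * 0.01
def pose_net(kps):
    out = kps.reshape(-1) @ W_pose
    q = out[:4] / (np.linalg.norm(out[:4]) + 1e-9)  # rotation as unit quaternion
    t = out[4:]                                      # translation vector
    return q, t

feat = rng.standard_normal(IMG_FEAT)       # stand-in for CNN backbone features
kps = keypoint_net(feat)
q, t = pose_net(kps)
print(kps.shape, q.shape, t.shape)         # (11, 2) (4,) (3,)
```

Because stage 2 is a plain function of stage 1's output (rather than an iterative solver), gradients from a pose loss can flow back through both stages, which is the property that enables end-to-end training.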


Notes

  1. The accuracy of our solution must still be measured on spaceborne images.


Acknowledgments

Special thanks go to Mikko Viitala and Jonathan Denies for the supervision of this work within Aerospacelab. The research was funded by Aerospacelab and the Walloon Region through the Win4Doc program. Christophe De Vleeschouwer is a Research Director of the Fonds de la Recherche Scientifique - FNRS. Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie Bruxelles (CÉCI) funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region.

Corresponding author

Correspondence to Antoine Legrand.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Legrand, A., Detry, R., De Vleeschouwer, C. (2023). End-to-end Neural Estimation of Spacecraft Pose with Intermediate Detection of Keypoints. In: Karlinsky, L., Michaeli, T., Nishino, K. (eds) Computer Vision – ECCV 2022 Workshops. ECCV 2022. Lecture Notes in Computer Science, vol 13801. Springer, Cham. https://doi.org/10.1007/978-3-031-25056-9_11

  • DOI: https://doi.org/10.1007/978-3-031-25056-9_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25055-2

  • Online ISBN: 978-3-031-25056-9
