
Towards High-Speed Localisation for Autonomous Drone Racing

  • Conference paper
  • In: Advances in Soft Computing (MICAI 2019)

Abstract

Knowing the pose of a drone on a race track is a challenging task in Autonomous Drone Racing (ADR). However, estimating the pose in real time and at high speed may be fundamental to the agile flight required to beat a human pilot in a drone race. In this work, we present the architecture of a CNN that automatically estimates the drone's pose relative to a gate on the race track. Given the challenges in ADR, various proposals have been developed to address the problem of autonomous navigation, including works that rely on a global localisation approach. Although there are well-known solutions for global localisation, such as visual odometry or visual SLAM, these methods may become too expensive to compute onboard. Motivated by this, we propose a CNN architecture based on the PoseNet network, which was designed to perform camera relocalisation in real time. Our contribution lies in modifying and re-training the PoseNet network to adapt it to the context of relative localisation with respect to a gate on the track. The ultimate goal is to use the proposed localisation approach to tackle the autonomous navigation problem. We report a maximum speed of up to 100 fps on a low-budget computer. Furthermore, seeking to test our approach in realistic scenarios, we carried out experiments with small gates of 1 m in diameter under different lighting conditions.
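The abstract does not reproduce the modified network's training objective, but the original PoseNet (Kendall et al., reference 13 below) regresses a 3-D position and a unit quaternion, trained with a weighted sum of the two errors. A minimal NumPy sketch of that loss follows; the weighting value `beta` is an illustrative default from the PoseNet paper's range, not a value reported in this work:

```python
import numpy as np

def posenet_loss(pred_xyz, pred_q, true_xyz, true_q, beta=250.0):
    """PoseNet-style pose regression loss: Euclidean position error
    plus a beta-weighted quaternion error (Kendall et al., 2015).
    beta is illustrative, not a value from this paper."""
    pos_err = np.linalg.norm(pred_xyz - true_xyz)
    # The ground-truth quaternion is normalised to unit length
    # before comparison, as in the original formulation.
    true_q = true_q / np.linalg.norm(true_q)
    rot_err = np.linalg.norm(pred_q - true_q)
    return pos_err + beta * rot_err

# Example: drone 2 m in front of the gate, identity orientation.
xyz = np.array([0.0, 0.0, 2.0])
q = np.array([1.0, 0.0, 0.0, 0.0])
print(posenet_loss(xyz, q, xyz, q))  # exact prediction -> 0.0
```

Because position error is measured in metres while quaternion error is dimensionless, `beta` controls the trade-off between translational and rotational accuracy during training.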

Department of Computer Science at INAOE.


References

  1. Bang, J., Lee, D., Kim, Y., Lee, H.: Camera pose estimation using optical flow and ORB descriptor in SLAM-based mobile AR game. In: 2017 International Conference on Platform Technology and Service (PlatCon), pp. 1–4, February 2017. https://doi.org/10.1109/PlatCon.2017.7883693

  2. Camposeco, F., Cohen, A., Pollefeys, M., Sattler, T.: Hybrid camera pose estimation. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018


  3. Casarrubias-Vargas, H., Petrilli-Barcelo, A., Bayro-Corrochano, E.: EKF-SLAM and machine learning techniques for visual robot navigation. In: 2010 20th International Conference on Pattern Recognition, pp. 396–399, August 2010


  4. Costante, G., Ciarfuglia, T.A.: LS-VO: learning dense optical subspace for robust visual odometry estimation. IEEE Robot. Autom. Lett. 3(3), 1735–1742 (2018). https://doi.org/10.1109/LRA.2018.2803211


  5. DeTone, D., Malisiewicz, T., Rabinovich, A.: Toward geometric deep SLAM. CoRR abs/1707.07410 (2017)


  6. Do, T.T., Cai, M., Pham, T., Reid, I.: Deep-6DPose: recovering 6D object pose from a single RGB image (2018)


  7. Fanani, N., Stürck, A., Ochs, M., Bradler, H., Mester, R.: Predictive monocular odometry (PMO): what is possible without RANSAC and multiframe bundle adjustment? Image Vis. Comput. 68, 3–13 (2017)


  8. Graves, A., Lim, S., Fagan, T., et al.: Visual odometry using convolutional neural networks. Kennesaw J. Undergrad. Res. 5(3), 5 (2017)


  9. Kaufmann, E., et al.: Beauty and the beast: optimal methods meet learning for drone racing. CoRR abs/1810.06224 (2018)


  10. Kaufmann, E., Loquercio, A., Ranftl, R., Dosovitskiy, A., Koltun, V., Scaramuzza, D.: Deep drone racing: learning agile flight in dynamic environments. CoRR abs/1806.08548 (2018)


  11. Kehl, W., Manhardt, F., Tombari, F., Ilic, S., Navab, N.: SSD-6D: making RGB-based 3D detection and 6D pose estimation great again. In: The IEEE International Conference on Computer Vision (ICCV), October 2017


  12. Kendall, A., Cipolla, R.: Modelling uncertainty in deep learning for camera relocalization. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 4762–4769, May 2016. https://doi.org/10.1109/ICRA.2016.7487679

  13. Kendall, A., Grimes, M., Cipolla, R.: PoseNet: a convolutional network for real-time 6-DOF camera relocalization. In: 2015 IEEE International Conference on Computer Vision (ICCV), pp. 2938–2946, December 2015. https://doi.org/10.1109/ICCV.2015.336

  14. Li, S., van der Horst, E., Duernay, P., De Wagter, C., de Croon, G.C.: Visual model-predictive localization for computationally efficient autonomous racing of a 72-gram drone. arXiv preprint arXiv:1905.10110 (2019)

  15. Mansur, S., Habib, M., Pratama, G.N.P., Cahyadi, A.I., Ardiyanto, I.: Real time monocular visual odometry using optical flow: study on navigation of quadrotors UAV. In: 2017 3rd International Conference on Science and Technology - Computer (ICST), pp. 122–126, July 2017. https://doi.org/10.1109/ICSTC.2017.8011864

  16. Moon, H., et al.: Challenges and implemented technologies used in autonomous drone racing. Intel. Serv. Robot. 12(2), 137–148 (2019)


  17. More, V., Kumar, H., Kaingade, S., Gaidhani, P., Gupta, N.: Visual odometry using optic flow for unmanned aerial vehicles. In: 2015 International Conference on Cognitive Computing and Information Processing (CCIP), pp. 1–6, March 2015


  18. Muller, P., Savakis, A.: Flowdometry: an optical flow and deep learning based approach to visual odometry. In: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 624–631, March 2017. https://doi.org/10.1109/WACV.2017.75

  19. Poirson, P., Ammirato, P., Fu, C.Y., Liu, W., Kosecka, J., Berg, A.C.: Fast single shot detection and pose estimation. In: 2016 Fourth International Conference on 3D Vision (3DV), pp. 676–684. IEEE (2016)


  20. Shalnov, E., Konushin, A.: Convolutional neural network for camera pose estimation from object detections. Int. Arch. Photogram. Remote Sens. Spat. Inf. Sci. 42 (2017)


  21. Szegedy, C., et al.: Going deeper with convolutions. In: Computer Vision and Pattern Recognition (CVPR) (2015). http://arxiv.org/abs/1409.4842

  22. Tateno, K., Tombari, F., Laina, I., Navab, N.: CNN-SLAM: real-time dense monocular slam with learned depth prediction. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6565–6574, July 2017


  23. Wang, S., Clark, R., Wen, H., Trigoni, N.: DeepVO: towards end-to-end visual odometry with deep recurrent convolutional neural networks. In: 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 2043–2050, May 2017


  24. Wu, Y., Liu, Y., Li, X.: Position estimation of camera based on unsupervised learning. In: Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence, PRAI 2018, pp. 30–35. ACM (2018)


  25. Yin, Z., Shi, J.: GeoNet: unsupervised learning of dense depth, optical flow and camera pose. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018



Author information

Corresponding author

Correspondence to José Arturo Cocoma-Ortega.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Cocoma-Ortega, J.A., Martínez-Carranza, J. (2019). Towards High-Speed Localisation for Autonomous Drone Racing. In: Martínez-Villaseñor, L., Batyrshin, I., Marín-Hernández, A. (eds) Advances in Soft Computing. MICAI 2019. Lecture Notes in Computer Science(), vol 11835. Springer, Cham. https://doi.org/10.1007/978-3-030-33749-0_59

  • DOI: https://doi.org/10.1007/978-3-030-33749-0_59

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33748-3

  • Online ISBN: 978-3-030-33749-0

  • eBook Packages: Computer Science, Computer Science (R0)
