Abstract
We present a system that combines deep learning and visual SLAM for autonomous flight in indoor environments. A state-of-the-art CNN architecture produces frame-by-frame depth estimates from images captured by the drone's onboard camera; these estimates are fed to a visual SLAM system that recovers camera pose with metric scale, which in turn drives a PID controller responsible for the autonomous flight. Because depth estimation and visual SLAM are computationally intensive, processing is carried out off-board on a ground control station that receives imagery and inertial data transmitted by the drone over a WiFi channel during the flight mission. The metric pose estimates are consumed by the PID controller, which communicates commands back to the vehicle. However, synchronisation issues arise between frame reception and pose estimation: frames typically arrive at 30 Hz while pose estimates are produced at 15 Hz. The resulting delay in the control loop can push the vehicle off the trajectory defined by the waypoints. To mitigate this, we implemented a stochastic filter that estimates the vehicle's velocity and acceleration in order to predict the pose for frames where no pose estimate is available yet and, when an estimate does arrive, to compensate for the communication delay. We have evaluated this methodology for indoor autonomous flight with promising results.
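The abstract does not specify the stochastic filter's exact formulation. As an illustration only, the sketch below shows one plausible realisation: a constant-acceleration Kalman filter over a single position axis (one instance per axis), which predicts at the 30 Hz frame rate and fuses SLAM poses as they arrive at roughly 15 Hz. The two rates come from the abstract; the class name PosePredictor and the noise parameters q and r are hypothetical, not the authors' implementation.

```python
import numpy as np

class PosePredictor:
    """Hedged sketch of a constant-acceleration Kalman filter for one axis.

    State x = [position, velocity, acceleration]^T. predict() runs on every
    camera frame (30 Hz); update() fuses a metric SLAM pose when one arrives
    (~15 Hz) and re-propagates to compensate a known communication delay.
    """

    def __init__(self, dt=1.0 / 30.0, q=1e-3, r=1e-2):
        self.dt = dt                                # frame period (30 Hz)
        self.x = np.zeros(3)                        # state estimate
        self.P = np.eye(3)                          # state covariance
        self.F = np.array([[1.0, dt, 0.5 * dt**2],  # constant-acceleration
                           [0.0, 1.0, dt],          # motion model
                           [0.0, 0.0, 1.0]])
        self.Q = q * np.eye(3)                      # process noise (assumed)
        self.H = np.array([[1.0, 0.0, 0.0]])        # SLAM observes position only
        self.R = np.array([[r]])                    # measurement noise (assumed)

    def predict(self):
        # Run every frame, including frames with no SLAM pose available yet.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]

    def update(self, z, delay=0.0):
        # Standard Kalman update with the (possibly stale) SLAM position z.
        y = np.array([z]) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ self.H) @ self.P
        # Compensate the communication delay by propagating the corrected
        # state forward to the current time.
        for _ in range(max(0, int(round(delay / self.dt)))):
            self.predict()
        return self.x[0]
```

In this scheme the controller calls predict() on every received frame to obtain a pose for the PID loop, and update(z, delay) whenever the off-board SLAM returns a pose, with delay set to the measured WiFi round-trip latency.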
Acknowledgment
This work has been partially funded by CONACYT-INEGI project no. 268528 and by the Royal Society through the Newton Advanced Fellowship with reference NA140454.
Cite this paper
Martinez-Carranza, J., Rojas-Perez, L.O., Cabrera-Ponce, A.A., Munguia-Silva, R. (2018). Combining Deep Learning and RGBD SLAM for Monocular Indoor Autonomous Flight. In: Batyrshin, I., Martínez-Villaseñor, M., Ponce Espinosa, H. (eds) Advances in Computational Intelligence. MICAI 2018. Lecture Notes in Computer Science, vol. 11289. Springer, Cham. https://doi.org/10.1007/978-3-030-04497-8_29