Abstract
Planes are common in human-made scenes and provide useful constraints for robust localization. In this paper, we propose a novel monocular visual-inertial odometry (VIO) system that leverages multi-plane priors. A novel visual-inertial-plane PnP algorithm uses plane information for fast localization. Planes are expanded via a reprojection-consensus scheme that is robust to depth estimation errors. A novel structureless plane-distance cost is used in the sliding-window optimization, which allows a small window size while maintaining good accuracy. Together with modified marginalization and sliding-window strategies, the computational cost is significantly reduced. Our VIO system is tested on various datasets and compared with several state-of-the-art systems. It achieves very competitive accuracy and works well on long and challenging sequences. It is also highly efficient, averaging 30 fps on an iPhone 7 with a single thread.
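The plane-distance cost mentioned above penalizes the signed distance of a 3-D point to its associated plane; in a structureless formulation the point is re-triangulated from feature tracks rather than kept as a state, so only camera poses and plane parameters remain in the optimization. The following is a minimal sketch of such a residual under standard plane geometry (the function name and the example plane are illustrative, not the authors' implementation):

```python
import numpy as np

def plane_distance_residual(n, d, point_w):
    """Signed distance of a 3-D world point to the plane n . x + d = 0,
    where n is a unit-length normal. The sliding-window optimizer would
    drive this residual toward zero for points lying on the plane."""
    return float(np.dot(n, point_w) + d)

# Illustrative example: a floor plane z = 0 and a point 5 cm above it.
n = np.array([0.0, 0.0, 1.0])  # unit normal of the plane
d = 0.0                        # plane offset from the origin
r = plane_distance_residual(n, d, np.array([1.0, 2.0, 0.05]))
```

Here `r` would equal the point's height above the floor (0.05 m), the quantity a least-squares plane constraint would minimize.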
This work was partially supported by NSF of China (Nos. 61822310 and 61672457), and the Fundamental Research Funds for the Central Universities (No. 2018FZA5011).
This work was done while Jinyu Li was a PhD student at Zhejiang University.
© 2019 Springer Nature Switzerland AG
Cite this paper
Li, J., Yang, B., Huang, K., Zhang, G., Bao, H.: Robust and efficient visual-inertial odometry with multi-plane priors. In: Lin, Z., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2019. LNCS, vol. 11859. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31726-3_24
Print ISBN: 978-3-030-31725-6
Online ISBN: 978-3-030-31726-3