Abstract
Identifying feature correspondences between consecutive frames is a critical prerequisite for a monocular Visual-Inertial Navigation System (VINS). In this paper, we propose a novel self-adaptation feature point correspondence identification algorithm based on IMU-aided information fusion at the feature-tracking level for nonlinear-optimization-based VINS. The method starts with an IMU pre-integration predictor that predicts the pose of each incoming frame. In weak-texture scenes and under motion blur, we introduce a novel predicting-matching feature point tracking strategy that builds new matches from the predicted pose, increasing the number of feature correspondences and lengthening feature tracks. In addition, the predicted pose is incorporated into the outlier rejection step to handle mismatches caused by dynamic objects. Finally, the proposed self-adaptation feature correspondence identification algorithm is implemented on top of VINS-Fusion and validated on public datasets. The experimental results show that it effectively improves the accuracy and tracking length of feature matching and outperforms state-of-the-art approaches in camera pose estimation.
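To make the pipeline concrete, the following is a minimal sketch, in Python with NumPy, of the IMU-aided prediction idea the abstract describes; it is not the paper's actual implementation. Inter-frame IMU samples are dead-reckoned to predict the new camera pose, tracked landmarks are projected into the predicted frame to seed the feature search, and candidate matches are gated by their pixel distance to the prediction. All function names, the pinhole model, and the pixel gate are illustrative assumptions.

```python
# Sketch of IMU-aided feature prediction and outlier gating.
# Assumptions: camera and body frames coincide, biases are ignored,
# and simple Euler integration stands in for full pre-integration.

import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def predict_pose(R, p, v, imu_samples, dt):
    """Dead-reckon rotation R, position p, velocity v over one
    inter-frame interval from (gyro, accel) body-frame samples."""
    for gyro, accel in imu_samples:
        R = R @ (np.eye(3) + skew(gyro * dt))  # small-angle rotation update
        a_world = R @ accel + GRAVITY          # specific force -> world accel
        p = p + v * dt + 0.5 * a_world * dt**2
        v = v + a_world * dt
    return R, p, v

def project(K, R, p, landmark):
    """Pinhole projection of a world point into the predicted frame."""
    pc = K @ (R.T @ (landmark - p))            # world -> camera -> image
    return pc[:2] / pc[2]

GATE_PX = 8.0  # reprojection gate in pixels; an assumed tuning value

def gate_matches(K, R_pred, p_pred, landmarks, measured_uv):
    """Keep only matches consistent with the IMU-predicted pose."""
    return [(lm, uv) for lm, uv in zip(landmarks, measured_uv)
            if np.linalg.norm(project(K, R_pred, p_pred, lm) - uv) < GATE_PX]
```

In VINS-Fusion itself the front end tracks features with KLT optical flow; the sketch replaces that step with a generic "seed and gate" search purely to keep the example self-contained.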
Data Availability
The datasets used in the experiments are publicly available at https://www.cvlibs.net/datasets/kitti/index.php and https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets.
Author information
Contributions
Zhelin Yu is the sole contributor to this work and declares no potential conflicts of interest with respect to the research, authorship, or publication of this article.
Ethics declarations
Ethical Approval
Ethical approval and informed consent were obtained in each original study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Yu, Z. A self-adaptation feature correspondences identification algorithm in terms of IMU-aided information fusion for VINS. Appl Intell 55, 202 (2025). https://doi.org/10.1007/s10489-024-06120-7