A self-adaptation feature correspondences identification algorithm in terms of IMU-aided information fusion for VINS

Published in Applied Intelligence

Abstract

Identifying feature correspondences between consecutive frames is a critical prerequisite for a monocular Visual-Inertial Navigation System (VINS). In this paper, we propose a novel self-adaptive feature-correspondence identification algorithm based on IMU-aided information fusion at the feature-tracking level of a nonlinear-optimization-based VINS. The method starts with an IMU pre-integration predictor that predicts the pose of each incoming frame. To increase the number of feature correspondences and lengthen feature tracks in weak-texture scenes and under motion blur, we introduce a novel predicting-matching feature-tracking strategy that builds new matches. The predicted pose is also incorporated into the outlier-rejection step to handle mismatches caused by dynamic objects. Finally, the proposed self-adaptive feature-correspondence identification algorithm is implemented on top of VINS-Fusion and validated on public datasets. The experimental results show that it effectively improves the accuracy and track length of feature matching, and that it outperforms state-of-the-art approaches in camera pose estimation.
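
To make the pipeline concrete, below is a minimal, self-contained Python sketch of the two IMU-aided steps the abstract describes: dead-reckoning a pose prediction from gyroscope/accelerometer samples between frames, and using that prediction to gate feature matches by reprojection error. This is an illustration of the general idea, not the paper's implementation; the function names (predict_pose, gate_matches), the gravity handling, and the 3-pixel threshold are assumptions made for this example.

import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    a = phi / theta
    A = skew(a)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def predict_pose(R, p, v, imu_samples, g=np.array([0.0, 0.0, -9.81])):
    """Dead-reckon rotation R, position p, and velocity v through a list of
    (gyro, accel, dt) body-frame samples between two frames; this captures
    the spirit of an IMU pre-integration pose predictor."""
    for gyro, accel, dt in imu_samples:
        a_world = R @ accel + g
        p = p + v * dt + 0.5 * a_world * dt**2
        v = v + a_world * dt
        R = R @ exp_so3(gyro * dt)
    return R, p, v

def gate_matches(pts3d, pts2d, R_pred, p_pred, K, thresh_px=3.0):
    """Outlier rejection: keep only matches whose 3D landmarks, reprojected
    with the predicted camera pose, land within thresh_px of the tracked
    2D location (e.g. mismatches on dynamic objects fail this check)."""
    inliers = []
    for X, u in zip(pts3d, pts2d):
        Xc = R_pred.T @ (X - p_pred)      # world -> predicted camera frame
        if Xc[2] <= 0:                    # behind the camera: reject
            continue
        uv = (K @ (Xc / Xc[2]))[:2]       # pinhole projection to pixels
        if np.linalg.norm(uv - u) < thresh_px:
            inliers.append((X, u))
    return inliers

A real pre-integration predictor would additionally track gyroscope/accelerometer bias estimates and propagate uncertainty; both are omitted here for brevity.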




Data Availability

The datasets used in the experiments are publicly available at https://www.cvlibs.net/datasets/kitti/index.php and https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets.

Notes

  1. https://github.com/HKUST-Aerial-Robotics/VINS-Fusion

  2. https://github.com/MichaelGrupp/evo
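
Footnote 2 refers to evo, the trajectory-evaluation toolkit commonly used to score VINS output against ground truth. As a usage illustration only, the sketch below computes the translational absolute pose error (APE) with evo's Python API; the file names ref.txt and est.txt are placeholders for TUM-format trajectories.

from evo.core import metrics, sync
from evo.tools import file_interface

# Load reference and estimated trajectories (TUM format: t x y z qx qy qz qw)
traj_ref = file_interface.read_tum_trajectory_file("ref.txt")
traj_est = file_interface.read_tum_trajectory_file("est.txt")

# Match poses by timestamp before comparing
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)

# Absolute pose error on the translation part, reported as RMSE
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print("APE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))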


Author information


Contributions

Zhelin Yu is the sole contributor to this work and declares no potential conflicts of interest with respect to the research, authorship, and publication of this article.

Corresponding author

Correspondence to Zhelin Yu.

Ethics declarations

Ethical Approval

Ethical approval and informed consent were obtained in each original study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yu, Z. A self-adaptation feature correspondences identification algorithm in terms of IMU-aided information fusion for VINS. Appl Intell 55, 202 (2025). https://doi.org/10.1007/s10489-024-06120-7

