
Vehicle Odometry with Camera-Lidar-IMU Information Fusion and Factor-Graph Optimization

  • Short Paper
  • Published in: Journal of Intelligent & Robotic Systems

Abstract

Formula Student Driverless (FSD) requires student teams to design and build a driverless vehicle that races on a track, which places great demands on the odometry solution. High-accuracy odometry plays a significant role in Simultaneous Localization and Mapping (SLAM) and autonomous navigation. This paper proposes an odometry method based on Camera-Lidar-IMU information fusion and factor-graph optimization, which solves the problem of observing velocity and pose transformation in high-speed racing scenes with sparse features. First, a YOLOv3-tiny object detector identifies cone objects in the camera images, and the detections are used to segment the object points from the Lidar point cloud. The object points are then registered, using the inertial measurement unit (IMU) pre-integration result as a rough initial estimate, to obtain the increment of the pose transformation in the horizontal plane. A Ground Normal Vector Registration method is developed that uses ground points to solve the increment of the vertical pose transformation. These two results are coupled to obtain a real-time odometry. Finally, the odometry results and observations are optimized at the back end with a factor-graph algorithm. Experiments show that the proposed method performs well in real environments, achieves high accuracy, and provides a good reference for vehicle SLAM and navigation.
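
The back end described above builds a factor graph over the vehicle poses, with each registration result entering as a relative-pose constraint. As a rough illustration only, not the authors' code, the following minimal sketch uses the GTSAM Python bindings, simplifies to planar `Pose2` states, and invents the numeric increments and noise values:

```python
# Minimal pose-graph sketch with GTSAM (hypothetical values; not the paper's code).
# Each registration result enters as a BetweenFactor; the optimizer refines all poses.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose at the origin with a tight prior.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Odometry increments (x, y, yaw), as would come from point-cloud registration.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
increments = [gtsam.Pose2(1.0, 0.0, 0.1), gtsam.Pose2(1.0, 0.0, 0.1)]
for i, delta in enumerate(increments):
    graph.add(gtsam.BetweenFactorPose2(i, i + 1, delta, odom_noise))

# Initial guess: slightly perturbed dead-reckoned poses.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(1, gtsam.Pose2(1.1, 0.1, 0.12))
initial.insert(2, gtsam.Pose2(2.1, 0.1, 0.18))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(2))  # refined estimate of the last pose
```

In a real-time setting such as the one the abstract describes, an incremental solver (e.g. iSAM2) would typically replace the batch Levenberg-Marquardt call, updating the solution as new factors arrive.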

Graphical abstract



Author information


Corresponding author

Correspondence to Yin-hui Ao.


Highlights

• A temporal and spatial information fusion method for a Camera-Lidar-IMU system.

• The six degrees of freedom are divided into two groups to obtain the full pose transformation (the ground-normal part is sketched after this list).

• Factor-Graph optimization is used to design the back-end nonlinear optimizer.

• The method works well in scenes with sparse features and has low hardware requirements.
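
To make the vertical-group idea concrete, here is a minimal numpy sketch, our illustration on invented data rather than the authors' implementation: fit a plane to segmented ground points by SVD and read roll and pitch off the upward normal. Exact signs depend on the rotation convention chosen.

```python
# Sketch: estimate roll/pitch from ground points via a plane fit (illustrative only).
import numpy as np

def ground_normal(points):
    """Least-squares plane normal of an (N, 3) array of ground points."""
    centered = points - points.mean(axis=0)
    # The right-singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal if normal[2] > 0 else -normal  # make it point upward

def roll_pitch_from_normal(n):
    """Roll/pitch (rad) from an upward unit normal, in one common Z-up convention;
    exact signs depend on the rotation order used."""
    roll = np.arctan2(n[1], n[2])
    pitch = -np.arcsin(np.clip(n[0], -1.0, 1.0))
    return roll, pitch

# Synthetic ground patch with a gentle roll and small noise (hypothetical data).
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(200, 2))
z = 0.05 * xy[:, 1] + rng.normal(0, 0.01, size=200)
n = ground_normal(np.column_stack([xy, z]))
print(roll_pitch_from_normal(n))  # roll near -0.05 rad, pitch near 0
```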

Supplementary Information

(MP4 45387 kb)


About this article


Cite this article

Peng, Wz., Ao, Yh., He, Jh. et al. Vehicle Odometry with Camera-Lidar-IMU Information Fusion and Factor-Graph Optimization. J Intell Robot Syst 101, 81 (2021). https://doi.org/10.1007/s10846-021-01329-x


Keywords

Navigation