GR-LOAM: LiDAR-based sensor fusion SLAM for ground robots on complex terrain

https://doi.org/10.1016/j.robot.2021.103759

Highlights

  • An odometer increment model fuses IMU and encoder data to estimate the robot pose on a manifold.

  • Abnormal sensor data detection and optimization weight adjustment.

  • A multi-sensor, ground-constrained refinement method that removes dynamic objects.

  • Extensive experiments based on a real ground robot in indoor and outdoor environments.

Abstract

Simultaneous localization and mapping is a fundamental process in robot navigation. We address this process for ground robots traveling on complex terrain by proposing GR-LOAM, a LiDAR-centric method that estimates robot ego-motion by fusing LiDAR, inertial measurement unit (IMU), and encoder measurements in a tightly coupled scheme. First, we derive an odometer increment model that fuses the IMU and encoder measurements to estimate the robot pose variation on a manifold. Then, we apply point cloud segmentation and feature extraction to obtain distinctive edge and planar features. Moreover, we propose an evaluation algorithm for the sensor measurements to detect abnormal data and reduce their corresponding weight during optimization. By jointly optimizing the cost derived from the LiDAR, IMU, and encoder measurements in a local window, we obtain low-drift odometry even on complex terrain. We use the estimated relative pose in the local window to reevaluate the matching distance across features and remove dynamic objects and outliers, thus refining the features before they are fed to the mapping thread and increasing the mapping efficiency. In the back end, GR-LOAM uses the refined point cloud and tightly couples the IMU and encoder measurements with ground constraints to further refine the estimated pose by aligning the features on a global map. Results from extensive experiments performed in indoor and outdoor environments using a real ground robot demonstrate the high accuracy and robustness of the proposed GR-LOAM for state estimation of ground robots.

Introduction

Simultaneous localization and mapping (SLAM) is essential for the navigation of ground robots in unknown environments. SLAM allows the robot pose to be estimated and provides information for path planning. As cameras mounted on ground robots are usually small, affordable, and easy to install, they are widely adopted for SLAM. In recent years, many studies have achieved high-performance visual SLAM [1], [2], [3], and by fusing measurements from inertial measurement units (IMUs) and cameras, the positioning accuracy and robustness have been further improved [4], [5], [6]. Although visual SLAM has many advantages, camera images are sensitive to lighting, textures, and large variations in the viewing angle. As an active sensing method, LiDAR provides high-quality distance information as point clouds of the surroundings, is insensitive to lighting, and can operate even at night. LiDAR odometry and mapping (LOAM) [7], [8] has achieved high performance and has remained at the forefront of the KITTI vision benchmark ranking. Building on this development, many high-performance LiDAR SLAM algorithms have been derived [9], [10], [11].

Although the accuracy and robustness of LiDAR SLAM are higher than those of visual SLAM, some problems remain to be addressed. First, the movement of a ground robot distorts the LiDAR point cloud, leading to errors in point cloud matching. Second, LiDAR obtains only a few features in open environments and captures dynamic objects (e.g., vehicles, pedestrians) that should not enter the map. Third, in environments such as long corridors, most points lie in similar planes, hindering localization and mapping. Fourth, the slow update rate of LiDAR measurements (around 10 Hz) cannot satisfy the feedback frequency required by pose controllers during robot navigation (usually above 50 Hz).

As individual sensors tend to fail in real settings, ground robots are often equipped with multiple sensors (e.g., LiDAR, camera, IMU, encoder), and combining their information may improve the overall system performance. For instance, an IMU measures angular velocity and linear acceleration, is more robust under dynamic motion, and can assist in removing distortion from LiDAR point clouds. However, the noise and random walk of IMU measurements increase the error over long integration periods. Although an encoder can measure the rotation angle of the robot wheels and thereby allow the robot displacement to be estimated, the estimate is accurate only on flat ground. Therefore, the suitability of encoders is limited to applications such as service robots and industrial automatic guided vehicles that are deployed in environments with flat ground. For other applications such as rescue, security, and inspection, robots often face complex terrain that includes slopes, steps, and stairs, as shown in Fig. 1. In such scenarios, sensor fusion comprising LiDAR, IMU, and encoder measurements is challenging. Moreover, given the inherent error of each type of sensor, inappropriate fusion may perform worse than using the sensors separately. Therefore, proper sensor fusion should be devised to improve the overall localization and mapping performance.
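To make the flat-ground limitation concrete, the sketch below (not from the paper; the wheel radius, track width, and tick resolution are arbitrary illustrative values) propagates a differential-drive pose from encoder ticks under the planar assumption.

```python
import numpy as np

# Hypothetical differential-drive dead reckoning under the planar assumption.
# Wheel radius, track width, and tick resolution are illustrative values only.
WHEEL_RADIUS = 0.10      # m
TRACK_WIDTH = 0.40       # m
TICKS_PER_REV = 4096

def planar_odometry_step(pose, left_ticks, right_ticks):
    """Propagate a planar pose (x, y, yaw) from one encoder reading.

    On flat ground this is a reasonable estimate; on a slope the same
    wheel travel is wrongly attributed entirely to the x-y plane, which
    is why encoder-only odometry degrades on complex terrain.
    """
    x, y, yaw = pose
    d_left = 2 * np.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * np.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d_center = 0.5 * (d_left + d_right)          # forward arc length
    d_yaw = (d_right - d_left) / TRACK_WIDTH     # heading change
    x += d_center * np.cos(yaw + 0.5 * d_yaw)
    y += d_center * np.sin(yaw + 0.5 * d_yaw)
    return (x, y, yaw + d_yaw)
```

On a 10° slope, for example, this planar model overestimates horizontal travel by a factor of 1/cos(10°) ≈ 1.015 and entirely misses the elevation change of about 0.17 m per meter traveled.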

We propose GR-LOAM, a method to fuse LiDAR, IMU, and encoder data for accurate and robust SLAM of ground robots on complex terrain. We propose an odometer increment model that allows fusing IMU and encoder data to calculate the robot pose variation on a manifold. In a local sliding window, odometry is then estimated by tightly coupling the LiDAR, IMU, and encoder data, and low-drift results are obtained even in challenging environments. In addition, we refine features by using the odometry results to remove dynamic objects and outliers, ensuring high feature quality before mapping as well as increasing the matching efficiency. Finally, GR-LOAM uses the refined feature points, tightly coupling IMU and encoder data again and including ground constraints to optimize the global robot state by aligning the features with a global map. We can summarize the contributions of the proposed GR-LOAM as follows:

  • An odometer increment model fuses IMU and encoder data to estimate the pose variation on a manifold (a minimal sketch is given after this list).

  • Odometry based on factor graphs tightly couples LiDAR, IMU, and encoder data, detects abnormal data, and adjusts the optimization weight of each sensor.

  • A multi-sensor, ground-constrained refinement method further optimizes the pose estimate and generates a global map from which dynamic objects are removed.

  • Extensive experiments based on a real ground robot in indoor and outdoor environments demonstrate the accuracy and robustness of the proposed GR-LOAM for localization and mapping of ground robots.
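As an illustration of the first contribution, the sketch below shows one simplified propagation step that combines a bias-corrected gyroscope rotation on SO(3) with the encoder forward speed. It conveys only the idea of an IMU/encoder increment on a manifold; the paper's actual model additionally pre-integrates these increments over a window and estimates the biases online, and the function names here are placeholders.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def odometer_increment(R, p, gyro, gyro_bias, v_enc, dt):
    """One propagation step of a simplified IMU/encoder increment model.

    Rotation comes from the bias-corrected gyroscope integrated on SO(3);
    translation comes from the encoder forward speed along the body x axis
    rotated into the world frame.
    """
    R_new = R @ so3_exp((gyro - gyro_bias) * dt)           # orientation update on the manifold
    p_new = p + R_new @ np.array([v_enc * dt, 0.0, 0.0])   # body-frame forward motion
    return R_new, p_new
```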

The remainder of this paper is organized as follows. Section 2 presents a survey of related works. Sections 3 and 4 present the preliminaries and detail the proposed GR-LOAM, respectively. The experimental setup and results are reported and analyzed in Section 5. Finally, we draw conclusions in Section 6.

Section snippets

Related work

Various LiDAR-based methods have been proposed to perform robot state estimation. For instance, the iterative closest point (ICP) algorithm for scan matching can serve as the basis for LiDAR odometry [12]. In fact, many SLAM studies (e.g., [13]) have been based on this algorithm and its variations [14], [15]. IMLS-SLAM [16] proposed a novel point cloud sampling strategy to extract effective laser points. The Implicit Moving Least Squares (IMLS) surface representation and scan-to-model matching is
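For context, a minimal point-to-point ICP iteration of the kind underlying [12] can be sketched as follows. This is a generic textbook formulation (nearest-neighbor association followed by a closed-form SVD alignment), not the implementation of any cited method.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iters=20):
    """Minimal point-to-point ICP: align `source` (N x 3) to `target` (M x 3).

    Each iteration finds nearest-neighbor correspondences and solves the
    rigid alignment in closed form (SVD). Production variants add outlier
    rejection, point-to-plane costs, and robust weighting.
    """
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest-neighbor correspondences
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        D = np.diag([1.0, 1.0, d])             # guard against reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        src = (R_step @ src.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```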

Frames and notation

The coordinate systems used to implement the proposed GR-LOAM are shown in Fig. 2. We assume that a ground robot navigates in a reference world coordinate system $W$. The origin of the encoder coordinate system is the geometric center of the robot chassis, and the robot moves along the positive $x$ axis. Let $\mathbf{p}_{AB_k}$ and $\mathbf{R}_{AB_k}$ represent the position and orientation of coordinate system $B$ with respect to $A$ at time $k$, respectively. We define the state vector as $\chi_{W_k} = \left[\mathbf{p}_{WO_k}, \mathbf{q}_{WO_k}, \mathbf{v}_{WO_k}, \mathbf{b}_a, \mathbf{b}_g\right]^T$ and $\mathbf{T}_{OL} = \left[\mathbf{p}_{OL}, \mathbf{q}_{OL}\right]^T$, where $\mathbf{p}_{WO_k}$,

System overview

The proposed GR-LOAM framework with its four threads is illustrated in Fig. 3. The first thread performs sensor data synchronization and preprocessing to obtain the pre-integration of IMU and encoder measurements and features from the LiDAR point cloud. The GR-odometry thread estimates the pose variation of the robot by fusing LiDAR, IMU, and encoder data and considers ground constraints to detect abnormal data and adjust the optimization weight of each sensor. We refine feature points by using
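A hypothetical skeleton of this four-thread data flow is sketched below; the queue names and stub functions are illustrative placeholders rather than the paper's actual interfaces.

```python
import queue
import threading

# Hypothetical skeleton of the four-thread data flow; queues and stubs are
# illustrative placeholders, not the paper's actual interfaces.
raw_q, feat_q, odom_q, refined_q = (queue.Queue() for _ in range(4))

def preprocess(bundle):            # synchronization, segmentation, feature
    return bundle                  # extraction, IMU/encoder pre-integration

def local_optimize(bundle):        # tightly coupled LiDAR/IMU/encoder window
    return bundle, {"pose": None}  # with per-sensor weight adjustment

def refine(bundle, pose):          # drop dynamic objects and outliers
    return bundle

def preprocessing_thread():
    while True:
        feat_q.put(preprocess(raw_q.get()))

def gr_odometry_thread():
    while True:
        odom_q.put(local_optimize(feat_q.get()))

def refinement_thread():
    while True:
        bundle, est = odom_q.get()
        refined_q.put((refine(bundle, est["pose"]), est))

def mapping_thread():
    while True:
        bundle, est = refined_q.get()  # scan-to-map alignment with ground constraints
        # ... update the global map and publish the refined global pose ...

for fn in (preprocessing_thread, gr_odometry_thread, refinement_thread, mapping_thread):
    threading.Thread(target=fn, daemon=True).start()
```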

Experiments and results

To the best of our knowledge, no public dataset is available containing LiDAR point clouds with corresponding IMU and encoder data for ground robots on complex terrain. Therefore, based on a ground rescue robot, we established a complete perception system and collected datasets in different environments. The terrain conditions include long indoor corridors, outdoor roads, slopes, stairs, lawn, dynamic objects, wheel slipping, and other elements, as listed in Table 1. And for the

Conclusion

We propose GR-LOAM for ground robots on complex terrain. This framework can estimate the robot state by fusing LiDAR, IMU, and encoder measurements in a tightly coupled scheme. By adequate processing and use of multi-sensor measurements, GR-LOAM achieves high performance even in challenging environments. The novel odometer increment model that fuses IMU and encoder measurements can predict the robot state and provide robust constraints for estimation even under low-quality LiDAR measurements.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (U20A20201) and LiaoNing Revitalization Talents Program, China (XLYC1807018).


References (37)

  • C. Sánchez-Belenguer et al., Global matching of point clouds for scan registration and loop detection, Robot. Auton. Syst. (2020)
  • R. Mur-Artal et al., ORB-SLAM: a versatile and accurate monocular SLAM system, IEEE Trans. Robot. (2015)
  • J. Engel et al., Direct sparse odometry, IEEE Trans. Pattern Anal. Mach. Intell. (2017)
  • C. Forster et al., SVO: Fast semi-direct monocular visual odometry
  • S. Leutenegger et al., Keyframe-based visual–inertial odometry using nonlinear optimization, Int. J. Robot. Res. (2015)
  • R. Mur-Artal et al., Visual-inertial monocular SLAM with map reuse, IEEE Robot. Autom. Lett. (2017)
  • T. Qin et al., VINS-Mono: A robust and versatile monocular visual-inertial state estimator, IEEE Trans. Robot. (2018)
  • J. Zhang et al., LOAM: Lidar odometry and mapping in real-time, Robot. Sci. Syst. (2014)
  • J. Zhang et al., Low-drift and real-time lidar odometry and mapping, Auton. Robots (2017)
  • T. Shan et al., LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain
  • H. Ye et al., Tightly coupled 3D lidar inertial odometry and mapping
  • J. Lin et al., Loam_livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV (2019)
  • P.J. Besl et al., Method for registration of 3-D shapes, Sensor Fusion IV: Control Paradigms and Data Structures, Int. Soc. Opt. Photonics (1992)
  • E. Mendes et al., ICP-based pose-graph SLAM
  • S. Rusinkiewicz et al., Efficient variants of the ICP algorithm
  • A. Segal et al., Generalized-ICP, Robot. Sci. Syst. (2009)
  • J.E. Deschaud, IMLS-SLAM: scan-to-model matching based on 3D data
  • J. Tang et al., Lidar scan matching aided inertial navigation system in GNSS-denied environments, Sensors (2015)

Yun Su received the B.S. degree from the Department of Automation, Chang'an University, China, in 2015. He is currently pursuing the Ph.D. degree with the Shenyang Institute of Automation, Chinese Academy of Sciences. His current research interests include robot collaboration control, multi-sensor fusion, and simultaneous localization and mapping. Email: [email protected].

Ting Wang received the B.S. degree from the Department of Automation, Dalian University of Technology, in 2001. In 2007, he received the Ph.D. degree from the Shenyang Institute of Automation, majoring in pattern recognition and intelligent systems. Since then, he has been working in the State Key Laboratory of Robotics at the Shenyang Institute of Automation. His research interests include robot control, special robot technology, pattern recognition, and intelligent systems. Email: [email protected].

Shiliang Shao received the B.S. degree from the Department of Electronic Information Engineering, Southwest University, in 2011, and the M.S. degree from the Department of Information Science and Engineering, Northeast University, in 2013. Since then, he has been working in the State Key Laboratory of Robotics at the Shenyang Institute of Automation. From 2016 to 2020, he studied for the Ph.D. degree at Northeastern University, which he received in 2020. His research interests include special robot technology, physiological signal analysis, and intelligent systems. Email: [email protected].

Chen Yao received the B.S. degree in Engineering from Southeast University in 1985. Since then, he has been working in the State Key Laboratory of Robotics at the Shenyang Institute of Automation, where he is currently a Professor. He has presided over several National Natural Science Foundation projects and has long been engaged in research on special robot control and related fields. Email: [email protected].

Zhidong Wang received the B.S. degree in Control Engineering from Beijing University of Aeronautics and Astronautics in 1987 and the Ph.D. degree in Mechanical Engineering from Tohoku University, Japan, in 1995. He is now a Professor in the Advanced Robotics Department of the Chiba Institute of Technology, Japan. His research interests include intelligent systems and dynamic control theory, multi-robot self-coordinated control systems, and cooperative operation through human–computer interaction. Email: [email protected].
