
Laser 3D tightly coupled mapping method based on visual information

Sixing Liu (School of Mechanical Engineering, Yangzhou University, Yangzhou, China)
Yan Chai (School of Mechanical Engineering, Yangzhou University, Yangzhou, China)
Rui Yuan (School of Mechanical Engineering, Yangzhou University, Yangzhou, China)
Hong Miao (School of Mechanical Engineering, Yangzhou University, Yangzhou, China)

Industrial Robot

ISSN: 0143-991x

Article publication date: 7 April 2023

Issue publication date: 16 November 2023


Abstract

Purpose

Simultaneous localization and mapping (SLAM), as a state estimation problem, is a prerequisite for autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is limited and its data contain few features, the camera is vulnerable to external conditions, and localization and map building cannot be performed stably and accurately with a single sensor. This paper aims to propose a laser 3D tightly coupled mapping method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.

Design/methodology/approach

At the front end of the method, visual feature points are first matched, and mismatched point pairs are removed with a bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain depth information for these features, and the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solved with a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused with the laser point cloud information and a similarity threshold is established to construct a loop closure framework, further reducing the cumulative drift error of the system over time.
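The front-end matching step described above can be sketched briefly. The snippet below is a minimal illustration, assuming ORB features and OpenCV: the brute-force matcher's cross-check stands in for the bidirectional consistency test, and a fundamental-matrix RANSAC stage removes remaining mismatches. The function names and parameter values are illustrative placeholders, not the authors' implementation.

# Minimal sketch of bidirectional feature matching with RANSAC-based
# outlier rejection (illustrative only; not the paper's actual code).
import cv2
import numpy as np

def match_features_bidirectional(img1, img2):
    """Detect ORB features and keep only mutually consistent, RANSAC-verified matches."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # crossCheck=True keeps a match only if it is the best candidate in both
    # directions, playing the role of the "bidirectional" consistency test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC on the fundamental matrix removes the remaining mismatched pairs.
    _, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    if inlier_mask is None:
        # RANSAC could not be estimated; fall back to the cross-checked matches.
        return pts1, pts2
    inliers = inlier_mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]

The surviving pixel pairs would then be assigned depth by projecting the lidar point cloud into the image before being passed to the pose estimation module.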

Findings

Experiments on publicly available data sets show that the proposed method matches the real trajectory well. For various scenes, the map can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method is also verified in a real environment on an autonomous walking acquisition platform; a system loaded with the method can run well over long periods and adapts to multiple kinds of scenes.

Originality/value

A multi-sensor tight coupling method is proposed that fuses laser and vision information for an optimal pose solution. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, Oriented FAST and Rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop closure framework that reduces error accumulation. Experimental validation shows that the accuracy and robustness of single-sensor SLAM algorithms can be improved.
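As a sketch of how an ORB bag-of-words loop check with a similarity threshold might look, the following assumes a k-means visual vocabulary (scikit-learn) built over ORB descriptors and a cosine-similarity score. The vocabulary size and threshold are placeholders, and the paper's actual loop criterion also fuses laser point cloud information.

# Minimal sketch of bag-of-words loop-closure scoring with a similarity
# threshold (illustrative assumptions; not the authors' implementation).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_vocabulary(descriptor_list, num_words=500):
    """Cluster ORB descriptors collected from many frames into a visual vocabulary."""
    all_desc = np.vstack(descriptor_list).astype(np.float32)
    return MiniBatchKMeans(n_clusters=num_words).fit(all_desc)

def bow_histogram(descriptors, vocabulary):
    """Describe one frame as a normalized histogram of visual-word occurrences."""
    words = vocabulary.predict(descriptors.astype(np.float32))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(np.float64)
    return hist / (np.linalg.norm(hist) + 1e-12)

def detect_loop(query_hist, keyframe_hists, threshold=0.8):
    """Return the index of the most similar past keyframe, or None if below threshold."""
    if not keyframe_hists:
        return None
    scores = [float(np.dot(query_hist, h)) for h in keyframe_hists]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

A detected loop candidate would then be geometrically verified against the laser point cloud before the accumulated drift is corrected.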


Acknowledgements

This work is partially supported by the National Characteristic Vegetable Industrial Technology System Post Expert Project (Grant No. CARS-24-D-03), the Natural Science Foundation of Jiangsu Province (Grant No. BK20170500) and the National Cooperation Project of Jiangsu Province (Grant No. BZ2021079).

Citation

Liu, S., Chai, Y., Yuan, R. and Miao, H. (2023), "Laser 3D tightly coupled mapping method based on visual information", Industrial Robot, Vol. 50 No. 6, pp. 917-929. https://doi.org/10.1108/IR-02-2023-0016

Publisher

Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited
