Abstract:
Localization for mobile robots in dynamic, large-scale environments is a challenging task, especially when relying solely on odometry and 2D LIDAR data. When operating in fleets, mutual detection and the exchange of localization information can be highly valuable. Detecting and classifying different robot types in a heterogeneous fleet, however, is nontrivial with 2D LIDAR data due to the sparse observation information. In this paper, a novel approach for mutual robot detection, classification, and relative pose estimation based on a combination of convolutional and ConvLSTM layers is presented to solve this issue. The algorithm learns an end-to-end classification and pose estimation of robot shapes from 2D LIDAR information transformed into a grid map. Subsequently, a mixture model representing the probability distribution of the pose measurement for each robot type is extracted from the heatmap output of the network. This output is then used in a cloud-based collaborative localization system to improve the localization accuracy of the individual robots. The effectiveness of our approach is demonstrated in both simulation and real-world experiments. The results of our evaluation show that the classification network achieves a precision of 90% on real-world data with an average position estimation error of 14 cm. Moreover, the collaborative localization system increases the localization accuracy of a robot equipped with low-cost sensors by 63%.
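For illustration only, the sketch below shows one way a network combining convolutional and ConvLSTM layers could be structured to consume a short sequence of LIDAR-derived occupancy grids and emit per-class detection heatmaps, as described in the abstract. It is a minimal sketch in Keras, not the authors' architecture: the grid resolution, sequence length, layer sizes, class count, and the `build_detector` function are all assumptions.

```python
# Minimal sketch (assumptions throughout): a CNN + ConvLSTM model that maps a
# sequence of 2D-LIDAR occupancy grids to per-class detection heatmaps.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 5        # number of consecutive LIDAR scans in a sequence (assumption)
GRID = 96          # occupancy-grid resolution in cells (assumption)
NUM_CLASSES = 3    # number of robot types in the heterogeneous fleet (assumption)

def build_detector():
    # Input: sequence of single-channel occupancy grids built from 2D LIDAR scans.
    x_in = layers.Input(shape=(SEQ_LEN, GRID, GRID, 1))

    # Per-frame convolutional feature extraction, weights shared across time steps.
    x = layers.TimeDistributed(layers.Conv2D(16, 3, padding="same", activation="relu"))(x_in)
    x = layers.TimeDistributed(layers.MaxPooling2D(2))(x)
    x = layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu"))(x)

    # ConvLSTM aggregates temporal context over the scan sequence.
    x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=False)(x)

    # Heatmap head: one channel per robot class giving the per-cell probability
    # that a robot of that type is present; a mixture model over poses could
    # then be fitted to these heatmaps in a downstream step.
    heatmap = layers.Conv2D(NUM_CLASSES, 1, activation="sigmoid", name="heatmap")(x)

    return models.Model(x_in, heatmap)

model = build_detector()
model.summary()
```

In such a setup, the heatmap output would serve as the measurement from which a pose probability distribution per robot type is extracted and forwarded to the collaborative localization system; the exact extraction and fusion steps are described in the paper itself.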
Date of Conference: 03-08 November 2019
Date Added to IEEE Xplore: 28 January 2020