Abstract:
In this work, we present a vision-only global localization architecture for autonomous vehicle applications that achieves centimeter-level accuracy and high robustness in various scenarios. We first apply pixel-wise segmentation to images from the front-view monocular camera and extract semantic features, e.g., pole-like objects, lane markings, and curbs, which are robust to changes in illumination, viewing angle, and season. For scenes without enough semantic information, we extract interest points on static backgrounds, such as the ground surface and buildings, guided by our semantic segmentation. We create the visual global map from semantic feature map layers extracted from a LiDAR point-cloud semantic map and a point feature map layer built with fixed-pose structure-from-motion (SfM). A lumped Levenberg-Marquardt optimization solver is then applied to minimize the cost from the two types of observations. We further evaluate the accuracy and robustness of our method with road tests on Alibaba's autonomous delivery vehicles in multiple scenarios, as well as on the KAIST Urban dataset.
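To make the lumped optimization concrete, below is a minimal sketch (not the authors' implementation) of jointly minimizing residuals from the two observation types the abstract names, semantic map features and SfM point features, with a Levenberg-Marquardt solver. The 2D pose parameterization, the point-to-point residuals, and the weights w_sem and w_pt are simplifying assumptions for illustration; the paper's actual cost terms and feature models are not specified here.

```python
# Hypothetical sketch: lumped LM solve over two residual types.
# Pose model (x, y, yaw), residual forms, and weights are assumptions.
import numpy as np
from scipy.optimize import least_squares

def transform(pose, pts):
    """Apply a 2D rigid transform (x, y, yaw) to Nx2 points."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + np.array([x, y])

def inverse_transform(pose, pts):
    """Map Nx2 global points into the vehicle frame at `pose`."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return (pts - np.array([x, y])) @ R

def residuals(pose, sem_obs, sem_map, pt_obs, pt_map, w_sem=1.0, w_pt=0.5):
    """Stack both observation types into one residual vector (lumped solve)."""
    r_sem = (transform(pose, sem_obs) - sem_map).ravel() * w_sem
    r_pt = (transform(pose, pt_obs) - pt_map).ravel() * w_pt
    return np.concatenate([r_sem, r_pt])

# Toy data: semantic features (e.g., pole positions) and point features,
# observed in the vehicle frame and matched against a global map.
rng = np.random.default_rng(0)
true_pose = np.array([2.0, -1.0, 0.1])
sem_map = rng.uniform(-10, 10, (5, 2))
pt_map = rng.uniform(-10, 10, (20, 2))
sem_obs = inverse_transform(true_pose, sem_map) + rng.normal(0, 0.01, sem_map.shape)
pt_obs = inverse_transform(true_pose, pt_map) + rng.normal(0, 0.05, pt_map.shape)

# Levenberg-Marquardt over the combined residuals from both map layers.
sol = least_squares(residuals, x0=np.zeros(3), method="lm",
                    args=(sem_obs, sem_map, pt_obs, pt_map))
print("estimated pose:", sol.x)  # should be close to true_pose
```

Stacking both residual types into one vector and solving once is the "lumped" aspect; per-type weights stand in for the observation covariances a full system would use.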
Date of Conference: 24 October 2020 - 24 January 2021
Date Added to IEEE Xplore: 10 February 2021