Abstract:
Accurate LiDAR-camera online calibration is critical for modern autonomous vehicles and robot platforms. Dominant methods rely heavily on hand-crafted features, which are not scalable in practice. With the increasing popularity of deep learning (DL), a few recent efforts have demonstrated the advantages of DL for feature extraction on this task. However, their reported performance is not yet satisfactory. We believe one improvement lies in formulating the problem with proper consideration of the underlying geometry. Moreover, existing online calibration methods focus on optimizing the calibration error while overlooking the tolerance within the error bounds. To address this research gap, a DL-based LiDAR-camera calibration method, named RGGNet, is proposed; it accounts for the Riemannian geometry of the problem and utilizes a deep generative model to learn an implicit tolerance model. When evaluated on KITTI benchmark datasets, the proposed method demonstrates superior performance to state-of-the-art DL-based methods. The code will be publicly available at https://github.com/KleinYuan/RGGNet.
Published in: IEEE Robotics and Automation Letters (Volume 5, Issue 4, October 2020)