Abstract:
This article introduces a robust method for accurately perceiving failures and calibrating multiline light detection and ranging (LiDAR) and cameras in natural environments in an online setting. Traditional target-free calibration methods rely on matching the spatial structures of 3-D point clouds with image features. However, obtaining dense point cloud data in a short amount of time for matching and optimization is challenging in online applications. To address this, our method uses single-frame sparse LiDAR point clouds for robust feature extraction and matching, with further optimization through contextual observation. Moreover, our approach is capable of perceiving and recalibrating extrinsic errors in online natural scenes, thus enhancing the calibration's robustness. We demonstrate the robustness and generalizability of our method on our own LIVOX-Road dataset, with evaluation results indicating subpixel accuracy. The code is released at: https://github.com/JMU-Robot/LiDAR-Camera-Online-Calibration.
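The core operation underlying target-free LiDAR-camera calibration of this kind is projecting 3-D LiDAR points into the image via the extrinsic and intrinsic parameters and scoring how well the projections align with image features. The sketch below illustrates that projection and a simple edge-alignment score; it is a minimal, generic illustration with assumed names (`project_points`, `edge_alignment_score`) and is not the paper's actual pipeline or feature-matching objective.

```python
import numpy as np

def project_points(points_xyz, T_cam_lidar, K):
    """Project LiDAR points (N,3) into the image plane.

    T_cam_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K: 3x3 camera intrinsic matrix.
    Returns (M,2) pixel coordinates for points in front of the camera.
    """
    n = points_xyz.shape[0]
    pts_h = np.hstack([points_xyz, np.ones((n, 1))])   # homogeneous (N,4)
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]         # camera frame (N,3)
    in_front = pts_cam[:, 2] > 1e-6                    # keep points with z > 0
    uvw = (K @ pts_cam[in_front].T).T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]                    # perspective divide

def edge_alignment_score(pixels, edge_map):
    """Mean edge strength sampled at projected pixels inside the image.

    A calibration search would seek the extrinsics maximizing this score,
    so that projected LiDAR depth discontinuities land on image edges.
    """
    h, w = edge_map.shape
    uv = np.round(pixels).astype(int)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    if not valid.any():
        return 0.0
    return float(edge_map[uv[valid, 1], uv[valid, 0]].mean())
```

With a correct extrinsic, projected geometric discontinuities from the sparse point cloud should coincide with strong image gradients; a drop in such a score over successive frames is one plausible signal for the kind of online miscalibration detection the abstract describes.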
Published in: IEEE Transactions on Instrumentation and Measurement (Volume 73)