Abstract:
Pixel-wise depth completion using multi-sensor fusion is crucial in areas such as autonomous driving. While LiDAR-image fusion methods are reliable, they can face challenges in adverse weather conditions such as rain and fog. In contrast, mmWave radar, which has emerged in recent years, offers stronger anti-interference capability. However, radar point clouds are typically highly sparse, and mmWave radar has lower resolution in the height dimension, leading to increased errors when the points are projected onto the image plane. To address these problems, this paper proposes a two-stage fusion convolutional neural network. In the first stage, image features are used to filter the noisy radar point cloud and to learn the mapping of radar points to image regions. In the second stage, we perform multiscale fusion of the image with the coarse depth map generated in the first stage to predict the missing depth values. Experimental results indicate that our strategy reduces the error of depth estimation: our network shows a 4.5% improvement in RMSE (root-mean-square error) over the previous method.
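The abstract describes the two-stage design only at a high level. The sketch below shows one plausible way such a pipeline could be wired up in PyTorch: stage 1 takes the image and the projected sparse radar depth and predicts a coarse depth plus a per-pixel confidence that suppresses noisy radar projections; stage 2 fuses image features with the filtered coarse depth at two scales to produce the dense output. All class and layer names, channel widths, and the confidence-gating mechanism are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStageRadarImageFusion(nn.Module):
    """Illustrative two-stage radar-image depth completion network.
    Layer sizes and the gating scheme are assumptions for illustration."""

    def __init__(self):
        super().__init__()
        # Stage 1: image-guided filtering of the projected radar depth.
        # Input: RGB image (3 ch) + sparse radar depth (1 ch).
        self.stage1 = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 3, padding=1),  # coarse depth + confidence logit
        )
        # Stage 2: two-scale fusion of image features with the coarse depth.
        self.enc1 = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec = nn.Sequential(
            nn.Conv2d(96, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, image, radar_depth):
        # Stage 1: coarse depth plus a confidence map used to
        # down-weight unreliable (mis-projected) radar points.
        out = self.stage1(torch.cat([image, radar_depth], dim=1))
        coarse_depth = out[:, :1]
        confidence = torch.sigmoid(out[:, 1:])
        filtered = confidence * coarse_depth

        # Stage 2: fuse the image with the filtered coarse depth at
        # full and half resolution, then decode a dense depth map.
        f1 = self.enc1(torch.cat([image, filtered], dim=1))  # full res
        f2 = self.enc2(f1)                                   # half res
        f2_up = F.interpolate(f2, size=f1.shape[-2:],
                              mode="bilinear", align_corners=False)
        dense_depth = self.dec(torch.cat([f1, f2_up], dim=1))
        return dense_depth, coarse_depth, confidence

# Minimal usage check with random inputs.
net = TwoStageRadarImageFusion()
img = torch.rand(1, 3, 128, 256)
radar = torch.rand(1, 1, 128, 256)
dense, coarse, conf = net(img, radar)
```

The key design point mirrored here is that radar filtering happens before dense fusion: stage 2 never sees raw radar projections, only the confidence-weighted coarse depth from stage 1.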
Date of Conference: 08-11 July 2024
Date Added to IEEE Xplore: 11 October 2024