Authors:
Hajira Saleem 1,2; Reza Malekian 1,2 and Hussan Munir 1,2
Affiliations:
1 Department of Computer Science and Media Technology, Malmö University, Malmö, 20506, Sweden
2 Internet of Things and People Research Centre, Malmö University, Malmö, 20506, Sweden
Keyword(s):
Visual Odometry, Image Enhancement, Low-Light Images, Localization, Pose Estimation.
Abstract:
Visual odometry is a key component of autonomous vehicle navigation due to its cost-effectiveness and efficiency.
However, it faces challenges in low-light conditions because it relies solely on visual features. To
mitigate this issue, various methods have been proposed, including sensor fusion with LiDAR, multi-camera
systems, and deep learning models based on optical flow and geometric bundle adjustment. While these
approaches show potential, they are often computationally intensive, perform inconsistently under different
lighting conditions, and require extensive parameter tuning. This paper evaluates the impact of image enhancement
models on visual odometry estimation in low-light scenarios. We assess odometry performance on
images processed with gamma transformation and four deep learning models: RetinexFormer, MAXIM, MIRNet,
and KinD++. These enhanced images were tested using two odometry estimation techniques: TartanVO
and Selective VIO. Our findings highlight the importance of models that enhance odometry-specific features
rather than merely increasing image brightness. Additionally, the results suggest that improving odometry
accuracy requires image-processing models tailored to the specific needs of odometry estimation. Furthermore,
since different odometry models operate on distinct principles, the same image-processing technique
may yield varying results across different models.
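Of the enhancement methods evaluated, gamma transformation is the classical baseline: it brightens dark regions by raising normalized pixel intensities to a power below one. The sketch below is illustrative only (function name, gamma value, and test values are our own, not from the paper), assuming images normalized to [0, 1]:

```python
import numpy as np

def gamma_transform(image, gamma=0.5):
    """Apply a gamma transformation to a low-light image.

    `image` is a float array normalized to [0, 1].
    gamma < 1 brightens dark regions; gamma > 1 darkens them.
    """
    return np.clip(image, 0.0, 1.0) ** gamma

# A dark pixel (0.04) is lifted to 0.2 with gamma = 0.5,
# while an already-bright pixel (1.0) is left unchanged.
dark = np.array([[0.04, 0.25], [0.64, 1.0]])
bright = gamma_transform(dark, gamma=0.5)
```

Because the exponent compresses the upper end of the intensity range, such global brightening can wash out the gradients and corners that feature-based odometry depends on, which is consistent with the paper's point that raising brightness alone does not guarantee better pose estimates.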