IEEE Journals & Magazine | IEEE Xplore

Unsupervised Monocular Depth Estimation for Monocular Visual SLAM Systems


Abstract:

Estimating monocular depth and ego-motion via unsupervised learning has emerged as a promising approach in autonomous driving, mobile robots, and augmented reality (AR)/VR applications. It avoids the intensive effort of collecting large amounts of ground truth and further improves the scene reconstruction density and long-term tracking accuracy of simultaneous localization and mapping (SLAM) systems. However, existing approaches are susceptible to illumination variations and to blurred images caused by fast motion in real-world driving scenarios. In this article, we propose a novel unsupervised learning framework that fuses the complementary strengths of visual and inertial measurements for monocular depth estimation. It learns both forward and backward inertial sequences in multiple subspaces to produce environment-independent and scale-consistent motion features, and it selectively weights the inertial and visual modalities to adapt to various scenes and motion states. In addition, we explore a novel virtual stereo model that incorporates these depth estimates into a monocular SLAM system, improving its efficiency and accuracy. Extensive experiments on the KITTI, EuRoC, and TUM datasets demonstrate the effectiveness of our method in terms of monocular depth estimation, SLAM initialization efficiency, and pose estimation accuracy compared with the state of the art.
Article Sequence Number: 2502613
Date of Publication: 13 December 2023
