
LD-Net: A Lightweight Network for Real-Time Self-Supervised Monocular Depth Estimation



Abstract:

Self-supervised monocular depth estimation from video sequences is promising for 3D environment perception. However, most existing methods rely on complicated depth networks, which are difficult to deploy on resource-constrained devices. To address this problem, in this letter we propose a novel encoder-decoder-based lightweight depth network (LD-Net). In brief, the encoder is composed of six efficient downsampling units and an Atrous Spatial Pyramid Pooling (ASPP) module, while the decoder consists of novel upsampling units built on sub-pixel convolutional (SP) layers. Experiments on the KITTI dataset show that the proposed LD-Net reaches nearly 150 frames per second (FPS) on a GPU and substantially reduces the number of model parameters while maintaining competitive accuracy compared with other state-of-the-art self-supervised monocular depth estimation methods.
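The abstract outlines the overall topology: six downsampling units, an ASPP bottleneck, and sub-pixel upsampling units. The following is a minimal PyTorch sketch of such an encoder-decoder, not the authors' released implementation; the channel widths, the depthwise-separable form of the downsampling units, the ASPP dilation rates, and the sigmoid disparity head are all assumptions made for illustration.

```python
# Hypothetical sketch of an LD-Net-style depth network (assumptions noted above).
import torch
import torch.nn as nn


class DownsampleUnit(nn.Module):
    """Assumed efficient stride-2 block: depthwise conv followed by a pointwise conv."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride=2, padding=1, groups=in_ch, bias=False),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convs, then a 1x1 projection."""

    def __init__(self, in_ch, out_ch, rates=(1, 6, 12)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False) for r in rates
        )
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class UpsampleUnit(nn.Module):
    """Sub-pixel upsampling unit: conv to scale^2 * out_ch channels, then PixelShuffle."""

    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.shuffle(self.conv(x)))


class LDNetSketch(nn.Module):
    """Encoder (6 downsampling units) -> ASPP -> decoder (6 sub-pixel upsampling units)."""

    def __init__(self):
        super().__init__()
        enc = [3, 16, 32, 64, 128, 128, 256]   # assumed channel widths
        dec = [128, 128, 64, 32, 16, 16, 16]
        self.encoder = nn.Sequential(*[DownsampleUnit(enc[i], enc[i + 1]) for i in range(6)])
        self.aspp = ASPP(enc[-1], dec[0])
        self.decoder = nn.Sequential(*[UpsampleUnit(dec[i], dec[i + 1]) for i in range(6)])
        self.disp_head = nn.Sequential(nn.Conv2d(dec[-1], 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.disp_head(self.decoder(self.aspp(self.encoder(x))))


if __name__ == "__main__":
    net = LDNetSketch().eval()
    with torch.no_grad():
        disp = net(torch.randn(1, 3, 192, 640))      # KITTI-like input resolution
    print(disp.shape)                                 # torch.Size([1, 1, 192, 640])
    print(sum(p.numel() for p in net.parameters()))   # rough parameter count
```

The sub-pixel (PixelShuffle) decoder trades transposed convolutions for a cheap channel-to-space rearrangement, which is one common way to keep upsampling lightweight; the exact block design in the letter may differ.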
Published in: IEEE Signal Processing Letters (Volume 29)
Page(s): 882 - 886
Date of Publication: 18 March 2022


