
Depth Estimation From a Single Image Using Deep Learned Phase Coded Mask


Abstract:

Depth estimation from a single image is a well-known challenge in computer vision. With the advent of deep learning, several approaches for monocular depth estimation have been proposed, all of which have inherent limitations due to the scarce depth cues that exist in a single image. Moreover, these methods are very demanding computationally, which makes them inadequate for systems with limited processing power. In this paper, a phase-coded aperture camera for depth estimation is proposed. The camera is equipped with an optical phase mask that provides unambiguous depth-related color characteristics for the captured image. These are used for estimating the scene depth map using a fully convolutional neural network. The phase-coded aperture structure is learned jointly with the network weights using backpropagation. The strong depth cues (encoded in the image by the phase mask, designed together with the network weights) allow a much simpler neural network architecture for faster and more accurate depth estimation. Performance achieved on simulated images as well as on a real optical setup is superior to the state-of-the-art monocular depth estimation methods (both with respect to the depth accuracy and required processing power), and is competitive with more complex and expensive depth estimation methods such as light-field cameras.
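The key idea in the abstract is end-to-end learning: the phase-coded aperture parameters are optimized jointly with the network weights by backpropagating through a differentiable model of the optics. The following is a toy sketch of that joint-optimization pattern only, not the paper's actual model: a single scalar `phi` stands in for the learned phase-mask parameters, a scalar linear map `(w, b)` stands in for the depth-estimation network, and the data is a hypothetical noiseless linear relation chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: "scene" values x and ground-truth depths d.
x = rng.normal(size=100)
d = 2.0 * x + 0.5

# Jointly learned parameters: phi stands in for the phase-mask parameter,
# (w, b) for the reconstruction-network weights (all scalars in this sketch).
phi, w, b = 0.1, 0.0, 0.0
lr = 0.05
for _ in range(500):
    y = phi * x            # differentiable "optical encoding" by the mask
    pred = w * y + b       # "network" depth estimate from the encoded image
    err = pred - d
    loss = np.mean(err ** 2)
    # Backpropagation flows through both the network and the optical layer,
    # so the mask parameter phi receives a gradient just like w and b.
    g_w = np.mean(2 * err * y)
    g_b = np.mean(2 * err)
    g_phi = np.mean(2 * err * w * x)
    w -= lr * g_w
    b -= lr * g_b
    phi -= lr * g_phi
```

After training, the product `phi * w` approaches the true slope and `b` the true offset, i.e. the optical parameter and the network weights have been co-adapted by the same gradient signal, which is the design principle the abstract describes.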
Published in: IEEE Transactions on Computational Imaging ( Volume: 4, Issue: 3, September 2018)
Page(s): 298 - 310
Date of Publication: 20 June 2018

