Abstract:
2-D-to-3-D conversion is an important step for obtaining 3-D videos, and a variety of monocular depth cues have been explored to generate 3-D videos from 2-D videos. As in the human brain, a fusion of these monocular depth cues can recover 3-D information from 2-D data. By mimicking how our brains form depth perception, we propose a reliability-based fusion of multiple depth cues for automatic 2-D-to-3-D video conversion. A series of comparisons between the proposed framework and previous methods is also presented, showing that significant improvement is achieved in both subjective and objective experimental results. From the subjective viewpoint, the brain-inspired framework outperforms earlier conversion methods by preserving more reliable depth cues. Moreover, objective gains of 0.70-3.14 dB in the modified peak signal-to-noise ratio and 0.0059-0.1517 in the disparity distortion model are realized in the perceptual quality of the videos.
Published in: IEEE Transactions on Circuits and Systems for Video Technology (Volume: 23, Issue: 7, July 2013)
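The abstract does not specify the fusion rule itself; the following is a minimal illustrative sketch, assuming a simple reliability-weighted average of per-cue depth maps. The function name, the choice of cues, and the weighting scheme are hypothetical illustrations, not the authors' actual method.

import numpy as np

def fuse_depth_cues(depth_maps, reliabilities, eps=1e-8):
    # Fuse per-cue depth maps with a reliability-weighted average.
    # depth_maps: list of H x W arrays, one per monocular cue
    #   (e.g., motion, defocus, geometric perspective -- hypothetical choices).
    # reliabilities: matching list of per-pixel (H x W) or scalar weights;
    #   higher values mean the cue is trusted more at that pixel.
    shape = np.asarray(depth_maps[0]).shape
    fused = np.zeros(shape, dtype=np.float64)
    total = np.zeros(shape, dtype=np.float64)
    for d, r in zip(depth_maps, reliabilities):
        w = np.broadcast_to(np.asarray(r, dtype=np.float64), shape)
        fused += w * np.asarray(d, dtype=np.float64)
        total += w
    return fused / np.maximum(total, eps)  # guard against all-zero weights

As a usage example, depth maps estimated from two cues could be combined with scalar reliabilities, e.g. fuse_depth_cues([d_motion, d_defocus], [0.7, 0.3]); per-pixel reliability maps of the same size as the depth maps would work identically.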