
Deep3DSaliency: Deep Stereoscopic Video Saliency Detection Model by 3D Convolutional Networks


Abstract:

Stereoscopic saliency detection plays an important role in various stereoscopic video processing applications. However, conventional stereoscopic video saliency detection methods mainly rely on independent, hand-crafted low-level features rather than learning features automatically, and thus ignore the intrinsic relationship between spatial and temporal information. In this paper, we propose a novel stereoscopic video saliency detection method based on 3D convolutional neural networks, namely, deep 3D video saliency (Deep3DSaliency). The proposed network consists of two sub-models: a spatiotemporal saliency model (STSM) and a stereoscopic saliency aware model (SSAM). STSM directly takes three consecutive video frames as input to extract visual spatiotemporal features, while SSAM further infers depth and semantic features from the left and right video frames using parameters shared with STSM. The visual spatiotemporal features from STSM and the depth and semantic features from SSAM are learned by an alternating optimization scheme. Finally, all of these saliency-related features are combined via 3D deconvolution to produce the final stereoscopic saliency map. Experimental results show the superior performance of the proposed model over existing methods in saliency estimation for 3D video sequences.
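
The two-branch design described in the abstract (a shared 3D-convolutional encoder feeding a spatiotemporal branch and a stereoscopic branch, with the branch features fused by 3D deconvolution) can be illustrated with a short PyTorch sketch. This is a minimal sketch under stated assumptions, not the paper's exact architecture: the class names (Encoder3D, Deep3DSaliencySketch), channel counts, layer depths, and the averaging of left/right view features are illustrative choices, and the alternating optimization scheme used for training is not shown.

    # Minimal PyTorch sketch of the two-branch design in the abstract.
    # Layer sizes, channel counts, and module names are assumptions.
    import torch
    import torch.nn as nn

    class Encoder3D(nn.Module):
        """Shared 3D-convolutional feature extractor; its parameters are
        shared between the STSM and SSAM branches, per the abstract."""
        def __init__(self, in_channels=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):  # x: (B, C, T, H, W)
            return self.features(x)

    class Deep3DSaliencySketch(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = Encoder3D()  # one encoder, shared by both branches
            # 3D deconvolution fuses the concatenated branch features
            # into a single-channel saliency map per frame.
            self.decoder = nn.Sequential(
                nn.ConvTranspose3d(128, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(32, 1, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, frames, left, right):
            # STSM branch: three consecutive frames stacked along time.
            stsm_feat = self.encoder(frames)  # (B, 64, 3, H, W)
            # SSAM branch: left/right views through the shared encoder;
            # averaging the two views is an assumption of this sketch.
            ssam_feat = 0.5 * (self.encoder(left) + self.encoder(right))
            fused = torch.cat([stsm_feat, ssam_feat], dim=1)  # (B, 128, 3, H, W)
            return self.decoder(fused)  # per-frame saliency maps

    # Shape check with random tensors: batch of 2, 3-frame clips at 64x64.
    model = Deep3DSaliencySketch()
    frames = torch.randn(2, 3, 3, 64, 64)
    left = torch.randn(2, 3, 3, 64, 64)
    right = torch.randn(2, 3, 3, 64, 64)
    print(model(frames, left, right).shape)  # torch.Size([2, 1, 3, 64, 64])

Using a single encoder instance for both branches mirrors the abstract's statement that SSAM infers depth and semantic features "by shared parameters from STSM"; in practice the two branches would be trained with the alternating optimization scheme the paper describes.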
Published in: IEEE Transactions on Image Processing (Volume: 28, Issue: 5, May 2019)
Page(s): 2305 - 2318
Date of Publication: 05 December 2018

PubMed ID: 30530363
