
Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model


Abstract:

In this paper, we investigate visual attention modeling for stereoscopic video from two aspects. First, we build a large-scale eye-tracking database as a benchmark for visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye-fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients; these features are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, where the uncertainty is estimated from the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on our large-scale eye-tracking database and on another database (DML-ITRACK-3D).
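To make the fusion step concrete, below is a minimal sketch of uncertainty-weighted fusion of a spatial and a temporal saliency map. The helper name fuse_saliency, the random stand-in maps, and the scalar uncertainty values are illustrative assumptions; the paper's actual spatial map comes from DCT-based feature contrast, its temporal map from planar and depth motion contrast, and its uncertainties from the Gestalt laws of proximity, continuity, and common fate.

```python
import numpy as np

def fuse_saliency(spatial, temporal, sigma_spatial, sigma_temporal, eps=1e-8):
    """Fuse spatial and temporal saliency maps with uncertainty weighting.

    Each map is weighted by the inverse of its uncertainty, so the cue that
    is estimated more reliably contributes more to the final saliency.
    """
    w_s = 1.0 / (sigma_spatial + eps)
    w_t = 1.0 / (sigma_temporal + eps)
    fused = (w_s * spatial + w_t * temporal) / (w_s + w_t)

    # Normalize to [0, 1] for visualization and evaluation.
    fused -= fused.min()
    if fused.max() > 0:
        fused /= fused.max()
    return fused

if __name__ == "__main__":
    h, w = 72, 128
    spatial = np.random.rand(h, w)   # stand-in for the DCT feature-contrast map
    temporal = np.random.rand(h, w)  # stand-in for the motion-contrast map
    fused = fuse_saliency(spatial, temporal, sigma_spatial=0.2, sigma_temporal=0.5)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

In this sketch, a lower uncertainty for the spatial cue (0.2 versus 0.5) makes the spatial map dominate the fused result; the paper derives these weights per sequence rather than fixing them by hand.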
Published in: IEEE Transactions on Image Processing ( Volume: 26, Issue: 10, October 2017)
Page(s): 4684 - 4696
Date of Publication: 28 June 2017

PubMed ID: 28678707

