Paper
Visual attention in egocentric field-of-view using RGB-D data
17 March 2017
Proceedings Volume 10341, Ninth International Conference on Machine Vision (ICMV 2016); 103410T (2017); https://doi.org/10.1117/12.2268617
Event: Ninth International Conference on Machine Vision, 2016, Nice, France
Abstract
Most existing solutions for predicting visual attention operate solely on 2D images and disregard depth information. This has always been a weak point, since depth is an inseparable part of biological vision. This paper presents a novel method of saliency map generation based on the results of our experiments with egocentric visual attention and an investigation of its correlation with perceived depth. We propose a model that predicts attention using a superpixel representation, under the assumption that contrasting objects are usually salient and have a sparser spatial distribution of superpixels than their background. To incorporate depth information into this model, we propose three different depth techniques. The evaluation is performed on our new RGB-D dataset, recorded with SMI eye-tracking glasses and a Kinect v2 device.
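
The abstract does not spell out the exact formulation, but the general recipe it describes (superpixel contrast plus spatial sparsity, fused with a depth prior) can be sketched as follows. This is a minimal illustration, not the authors' method: the SLIC segmentation, the Gaussian spatial weighting, the sparsity measure, and the closer-is-more-salient depth fusion below are all assumptions of this sketch, and the function saliency_rgbd is a hypothetical name.

    # A minimal sketch of superpixel-based RGB-D saliency (illustrative
    # assumptions throughout; not the paper's exact formulation).
    import numpy as np
    from skimage.color import rgb2lab
    from skimage.segmentation import slic

    def saliency_rgbd(rgb, depth, n_segments=200):
        """rgb: (H, W, 3) uint8 image; depth: (H, W) float map, larger = farther
        (assumed). Returns a per-pixel saliency map in [0, 1]."""
        lab = rgb2lab(rgb)
        labels = slic(rgb, n_segments=n_segments, start_label=0)
        n = labels.max() + 1

        # Per-superpixel mean Lab color, centroid, and mean depth.
        means = np.zeros((n, 3))
        centroids = np.zeros((n, 2))
        depths = np.zeros(n)
        ys, xs = np.mgrid[0:rgb.shape[0], 0:rgb.shape[1]]
        for i in range(n):
            mask = labels == i
            means[i] = lab[mask].mean(axis=0)
            centroids[i] = (ys[mask].mean(), xs[mask].mean())
            depths[i] = depth[mask].mean()

        # Color contrast of each superpixel against all others, attenuated
        # by spatial distance (nearby contrast matters more).
        color_d = np.linalg.norm(means[:, None] - means[None], axis=2)
        spatial_d = np.linalg.norm(centroids[:, None] - centroids[None], axis=2)
        diag = np.hypot(*rgb.shape[:2])
        contrast = (color_d * np.exp(-spatial_d / (0.25 * diag))).sum(axis=1)

        # Spatial sparsity: similarly colored superpixels spread widely across
        # the image indicate background; compact color clusters indicate
        # salient objects (assumption following the abstract).
        sim = np.exp(-color_d / (color_d.mean() + 1e-6))
        spread = (sim * spatial_d).sum(axis=1) / sim.sum(axis=1)
        sparsity = 1.0 - spread / (spread.max() + 1e-6)

        # Depth prior: one of several possible fusions; here closer
        # superpixels are weighted higher (purely an assumption).
        depth_w = 1.0 - (depths - depths.min()) / (np.ptp(depths) + 1e-6)

        s = contrast * sparsity * depth_w
        s = (s - s.min()) / (np.ptp(s) + 1e-6)
        return s[labels]  # broadcast per-superpixel saliency back to pixels

Given an RGB frame aligned with its depth map (e.g., from a Kinect v2), saliency_rgbd(rgb, depth) yields a dense map suitable for comparison against eye-tracking fixation data.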
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Veronika Olesova, Wanda Benesova, and Patrik Polatsek "Visual attention in egocentric field-of-view using RGB-D data", Proc. SPIE 10341, Ninth International Conference on Machine Vision (ICMV 2016), 103410T (17 March 2017); https://doi.org/10.1117/12.2268617
CITATIONS
Cited by 3 scholarly publications.
KEYWORDS
Visualization
Data modeling
RGB color model
Visual process modeling
3D image processing
3D modeling
Glasses