Video quality assessment using visual attention computational models
Welington Y. Akamine, Mylène C. Farias
Abstract
A recent development in the area of image and video quality consists of incorporating aspects of visual attention into the design of visual quality metrics, mostly under the assumption that visual distortions appearing in less salient areas might be less visible and, therefore, less annoying. This research area is still in its infancy, and the results obtained by different groups are not yet conclusive. Among the works that have reported improvements, most use subjective saliency maps, i.e., saliency maps generated from eye-tracking data obtained experimentally. Other works address only the image quality problem and do not consider how to incorporate visual attention into quality metrics for video signals. We investigate the benefits of incorporating bottom-up video saliency maps (obtained using Itti's computational model) into video quality metrics. In particular, we compare the performance of four full-reference video quality metrics with their modified versions, which have saliency maps incorporated into the algorithm. Results show that the addition of video saliency maps improves the performance of most of the quality metrics tested, but the highest gains were obtained for the metrics that only take spatial degradations into consideration.
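As an illustration of the general idea (a minimal sketch, not the paper's exact method), a saliency map can be used as a spatial pooling weight for a full-reference metric: a per-pixel distortion map for each frame is weighted by the corresponding saliency map before averaging, so that errors in salient regions contribute more to the final score. The example below assumes a simple squared-error distortion measure; in practice, the saliency maps would come from a computational model such as Itti's.

```python
# Minimal sketch of saliency-weighted pooling for a full-reference video metric.
# Assumes reference, distorted, and saliency are arrays of shape (frames, height, width).
import numpy as np

def saliency_weighted_mse(reference, distorted, saliency, eps=1e-8):
    """Saliency-weighted mean squared error over a video.

    reference, distorted: float arrays of shape (frames, height, width)
    saliency: non-negative saliency maps of the same shape
              (e.g., produced by Itti's bottom-up attention model)
    """
    error = (reference.astype(np.float64) - distorted.astype(np.float64)) ** 2
    weights = saliency.astype(np.float64)
    # Weighted spatial pooling per frame, followed by temporal averaging.
    frame_scores = (error * weights).sum(axis=(1, 2)) / (weights.sum(axis=(1, 2)) + eps)
    return frame_scores.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((5, 64, 64))
    dist = ref + 0.05 * rng.standard_normal(ref.shape)
    sal = rng.random(ref.shape)  # placeholder for computed saliency maps
    print(saliency_weighted_mse(ref, dist, sal))
```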
© 2014 SPIE and IS&T 0091-3286/2014/$25.00
Welington Y. Akamine and Mylène C. Farias "Video quality assessment using visual attention computational models," Journal of Electronic Imaging 23(6), 061107 (5 September 2014). https://doi.org/10.1117/1.JEI.23.6.061107
Published: 5 September 2014
KEYWORDS: Cameras, Reconstruction algorithms, Video, 3D acquisition, Calibration, Visualization, 3D image reconstruction
