A Randomized Framework for Estimating Image Saliency Through Sparse Signal Reconstruction

Kui Fu, Jia Li
Copyright: © 2018 | Volume: 9 | Issue: 2 | Pages: 20
ISSN: 1947-8534 | EISSN: 1947-8542 | EISBN13: 9781522543794 | DOI: 10.4018/IJMDEM.2018040101

Abstract

This article proposes a randomized framework that estimates image saliency through sparse signal reconstruction. The authors simulate the measuring process of ground-truth saliency by assuming that an image is free-viewed by several subjects. During free viewing, each subject attends to a limited number of randomly selected regions and reconstructs a mental map of the image using subject-specific prior knowledge. Under the assumption that a region that is difficult to reconstruct becomes conspicuous, the authors represent a subject's prior knowledge with a dictionary of sparse bases pre-trained on random images and estimate the conspicuity score of a region from the activation costs of the sparse bases as well as the sparse reconstruction error. Finally, the saliency map of an image is generated by summing up all the conspicuity maps. Experimental results show that the proposed approach achieves impressive performance in comparison with 16 state-of-the-art approaches.
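To make the pipeline in the abstract concrete, the following is a minimal sketch, not the authors' implementation. The patch size, number of simulated subjects, regions per subject, the grayscale 2-D image format, and the choice of scikit-learn's MiniBatchDictionaryLearning and sparse_encode as the dictionary learner and sparse coder are all illustrative assumptions; the conspicuity score follows the abstract's recipe of sparse reconstruction error plus basis activation cost.

```python
# Sketch of randomized saliency via sparse reconstruction (illustrative only).
# Assumes images are 2-D float grayscale arrays larger than the patch size.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

PATCH = 8          # patch side length (assumption)
N_SUBJECTS = 20    # simulated free-viewing subjects (assumption)
N_REGIONS = 100    # randomly attended regions per subject (assumption)

def extract_patch(img, y, x):
    """Flatten the PATCH x PATCH region whose top-left corner is (y, x)."""
    return img[y:y + PATCH, x:x + PATCH].ravel()

def train_dictionary(random_images, n_atoms=64, n_patches=5000, seed=0):
    """Pre-train a dictionary of sparse bases on patches sampled from
    random images, standing in for a subject's prior knowledge."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = random_images[rng.integers(len(random_images))]
        y = rng.integers(img.shape[0] - PATCH)
        x = rng.integers(img.shape[1] - PATCH)
        patches.append(extract_patch(img, y, x))
    X = np.asarray(patches, dtype=float)
    X -= X.mean(axis=1, keepdims=True)          # remove per-patch DC component
    return MiniBatchDictionaryLearning(
        n_components=n_atoms, alpha=1.0, random_state=seed).fit(X).components_

def saliency_map(img, dictionary, seed=0):
    """Sum per-subject conspicuity maps: a randomly attended region scores
    higher when it is costly to reconstruct from the sparse bases."""
    rng = np.random.default_rng(seed)
    saliency = np.zeros(img.shape, dtype=float)
    for _ in range(N_SUBJECTS):
        for _ in range(N_REGIONS):
            y = rng.integers(img.shape[0] - PATCH)
            x = rng.integers(img.shape[1] - PATCH)
            p = extract_patch(img, y, x).astype(float)
            p -= p.mean()
            code = sparse_encode(p[None, :], dictionary,
                                 algorithm='lasso_lars', alpha=0.5)
            recon_err = np.sum((p - code @ dictionary) ** 2)
            activation_cost = np.abs(code).sum()  # cost of activated bases
            saliency[y:y + PATCH, x:x + PATCH] += recon_err + activation_cost
    return saliency / saliency.max()
```

The random region sampling plays the role of the subjects' free viewing, and summing the per-region scores across subjects mirrors the abstract's aggregation of conspicuity maps into a single saliency map.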
