Abstract:
In this paper, a new bottom-up visual saliency model is proposed. Based on the idea that locally contrasted and globally rare features are salient, this model is called “RARE” in the following sections. It uses a sequential bottom-up feature extraction in which low-level features such as luminance and chrominance are computed first, and from those results medium-level features such as image orientations are extracted. A qualitative and a quantitative comparison are carried out on a 120-image dataset. The RARE algorithm predicts human fixations well compared with most of the freely available saliency models.
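The rarity principle described above lends itself to a compact illustration. The Python sketch below is not the authors' RARE implementation; it only shows, under stated assumptions, how "globally rare" feature values can be turned into a saliency map via self-information. The opponent chrominance channels, the Sobel-based orientation map, the histogram bin count, and the simple summation fusion are all assumptions made for this example.

```python
# Illustrative rarity-based saliency sketch (not the authors' RARE code).
# Assumes an RGB image given as a float array with values in [0, 1].
import numpy as np
from scipy import ndimage

def rarity_map(feature, bins=16):
    """Assign each pixel the self-information -log p of its quantized value:
    globally rare feature values receive high saliency."""
    span = np.ptp(feature) + 1e-8
    q = np.clip((feature - feature.min()) / span * (bins - 1), 0, bins - 1).astype(int)
    hist = np.bincount(q.ravel(), minlength=bins).astype(float)
    p = hist / hist.sum()
    return -np.log(p[q] + 1e-8)

def rare_like_saliency(rgb):
    # Low-level features: luminance and two opponent chrominance channels
    # (the exact channel definitions are assumptions for this sketch).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    chrom1 = r - g
    chrom2 = b - (r + g) / 2
    # Medium-level feature derived from the low-level result:
    # local gradient orientation of the luminance channel.
    gx = ndimage.sobel(lum, axis=1)
    gy = ndimage.sobel(lum, axis=0)
    orientation = np.arctan2(gy, gx)
    # Fuse the per-feature rarity maps into one saliency map (simple sum here).
    sal = sum(rarity_map(f) for f in (lum, chrom1, chrom2, orientation))
    return (sal - sal.min()) / (np.ptp(sal) + 1e-8)

if __name__ == "__main__":
    img = np.random.rand(120, 160, 3)          # placeholder image
    print(rare_like_saliency(img).shape)       # (120, 160)
```

In this sketch, rarity is computed from a global histogram per feature map; the local-contrast component of the model and its fusion strategy are not reproduced here.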
Date of Conference: 30 September 2012 - 03 October 2012