Abstract:
Superpixels have recently gained importance in image segmentation and classification problems. In scene labeling, the image is first segmented into visually consistent small regions using a superpixel algorithm; the superpixels are then parsed into different classes. Classification performance depends heavily on the properties and parameter settings of the superpixel algorithm in use. In this paper, a method is proposed to improve scene labeling accuracy by fusing, at the classifier level, the results of multiple superpixel segmentations. First, likelihood ratios for superpixel labels are determined using the simple, nonparametric SuperParsing algorithm, which requires no training. Then, the final scene segmentation and labeling is performed by pixel-level fusion of the likelihood ratios computed for the alternative superpixel segmentation scenarios. The proposed method is tested on the SIFT Flow dataset, which consists of 2,688 images and 33 labels, and is shown to outperform SuperParsing in terms of classification accuracy.
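As a rough illustration of the pixel-level fusion step described in the abstract, the sketch below broadcasts per-superpixel likelihood ratios to the pixels they cover and then combines several alternative segmentations by multiplying the ratios (summing in log space) before picking the most likely label per pixel. The function names, array shapes, and the log-sum fusion rule are assumptions for illustration only, not the paper's exact formulation.

    import numpy as np

    def spread_to_pixels(superpixel_map, superpixel_ratios):
        """Broadcast per-superpixel likelihood ratios to every pixel.

        superpixel_map: (H, W) int array of superpixel indices for one segmentation.
        superpixel_ratios: (num_superpixels, num_labels) array of likelihood ratios
        computed by a SuperParsing-style nonparametric classifier (assumed given).
        Returns an (H, W, num_labels) per-pixel likelihood-ratio map.
        """
        return superpixel_ratios[superpixel_map]

    def fuse_likelihood_ratios(likelihood_ratio_maps):
        """Fuse per-pixel likelihood ratios from several superpixel segmentations.

        likelihood_ratio_maps: list of (H, W, num_labels) arrays, one per
        segmentation scenario. Ratios are multiplied across scenarios
        (summed in log space), then the highest-scoring label is taken
        at each pixel. Returns an (H, W) array of label indices.
        """
        stacked = np.stack(likelihood_ratio_maps, axis=0)                 # (S, H, W, L)
        fused = np.sum(np.log(np.clip(stacked, 1e-12, None)), axis=0)     # (H, W, L)
        return np.argmax(fused, axis=-1)                                  # (H, W)

A usage pattern under these assumptions would be to call spread_to_pixels once per superpixel segmentation and pass the resulting maps to fuse_likelihood_ratios to obtain the final labeling.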
Date of Conference: 16-19 May 2015
Date Added to IEEE Xplore: 22 June 2015
Electronic ISBN: 978-1-4673-7386-9
Print ISSN: 2165-0608