EFA-BMFM: A multi-criteria framework for the fusion of colour image segmentation
Introduction
The focus of image segmentation is to divide an image into separate regions with homogeneous attributes [1]. This step is crucial for higher-level tasks such as feature extraction, pattern recognition, and target detection [2]. Several promising methods for the segmentation of textured natural images have recently been proposed in the literature. Among these, the methods based on combining multiple weak segmentations of the same image to improve segmentation quality are appealing from a theoretical perspective and offer an effective compromise between the complexity of the segmentation model and its efficiency.
Most of the approaches that compute a fusion result from a set of initial, weak putative segmentation maps are theoretically grounded in the notion of the median partition. Given a specific criterion (which can also be expressed as a distance or similarity index/measure between two segmentation maps), the median partition approach seeks the (consensus) solution that minimizes the average distance (or maximizes the average similarity) to the other segmentations to be fused. To date, a large and growing number of fusion-segmentation approaches based on the median partition problem, with different criteria or different optimization strategies, have been proposed in the literature.
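As a toy illustration of the median-partition idea (not the algorithm proposed in this paper), one can approximate the consensus by selecting the ensemble member with the highest average Rand index to all members; the `rand_index` and `median_partition` helpers below are names introduced purely for this sketch:

```python
from itertools import combinations

def rand_index(a, b):
    """Fraction of pixel pairs on which two labelings agree
    (same label in both, or different labels in both)."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

def median_partition(ensemble):
    """Approximate median partition: the ensemble member with the highest
    average similarity (Rand index) to the other members."""
    return max(ensemble,
               key=lambda s: sum(rand_index(s, t) for t in ensemble))

# Three weak segmentations of a six-pixel image (flattened label maps).
ensemble = [[0, 0, 0, 1, 1, 1],
            [0, 0, 1, 1, 1, 1],
            [0, 0, 0, 1, 1, 1]]
consensus = median_partition(ensemble)   # -> [0, 0, 0, 1, 1, 1]
```

Restricting the search to the ensemble members themselves is only a crude approximation; the energy-based models surveyed below search a much larger space of partitions.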
For example, a fusion model of weak segmentations was initially introduced, in the evidence-accumulation sense, in [3] with a co-association matrix; in [4], fusion was instead based on minimizing the inertia (or intra-cluster scatter) criterion across cluster instances (represented by the set of local re-quantized label histograms given by each input segmentation to be fused). The fusion of multiple segmentation maps has also been addressed with respect to the Rand Index (RI) criterion (or its probabilistic version), using either a stochastic constrained random-walking technique [5] (combined with a mutual information-based estimator to assess the optimal number of regions), an algebraic optimization method [6], a Bayesian Markov random field model [7], a superpixel-based approach optimized by the expectation-maximization procedure [8], or a similarity distance function built from the adjusted RI [9] and optimized with stochastic gradient descent. The solution of the median partition problem can also be determined according to an entropy criterion, either in the variation-of-information (VoI) sense [10], using a linear-complexity, energy-based model optimized by an iterative steepest-local-energy-descent strategy combined with a spatial connectivity constraint, or in the mutual-information sense [11] using expectation-maximization (EM) optimization. The fusion of clustering results can likewise be carried out according to the global consistency error (GCE) criterion [12] (a perceptual measure that accounts for the inherent multiscale nature of an image segmentation by measuring the level of refinement between two spatial partitions) or based on the precision-recall criterion [13] using a hierarchical relaxation scheme. In this context, Franek et al. [14] proposed a methodology allowing virtually any ensemble clustering method to address the image-segmentation combination problem.
Their strategy is mainly based on a pre-processing step that estimates a superpixel map from the segmentation ensemble in order to reduce the dimensionality of the combinatorial problem. Finally, in remote sensing, combination models have been reported based on the maximum-margin sense (of the hyperplanes between spatial clusters) [15], as well as the recent Bayesian fusion procedure of [16], in which the class labels obtained from different segmentation maps (produced by different sensors) are fused by the weights-of-evidence model.
In fact, the performance of these energy-based fusion models depends both on the optimization procedure, i.e., its ability to find an optimal solution as quickly as possible, and, to a large extent, on the chosen fusion criterion, which defines all the intrinsic properties of the consensus segmentation map to be estimated. Even assuming that an efficient optimization procedure is designed and implemented (in terms of quickly finding a globally optimal and stable solution), it remains unclear whether any single criterion is able both to extract all the useful information contained in the segmentation ensemble and to model all the complex geometric properties of the final consensus segmentation map. Another way to look at this problem is to note that if the fusion procedure optimizes a single criterion, it is inherently biased towards one particular family of possible solutions; in other words, the criterion defines a priori which regions of the search space contain acceptable solutions. This may bias and limit the performance of an image segmentation model. To overcome this main disadvantage (the bias caused by a single criterion), we propose to use multi-objective optimization to design a new fusion-segmentation model that takes advantage of the (potential) complementarity of different objectives (criteria) and finally yields a better consensus segmentation result. Following this strategy, we introduce a new multi-criteria fusion model weighted by an entropy-based confidence measure (EFA-BMFM).
The main goal of this model is to simultaneously combine and optimize two different and complementary segmentation-fusion criteria, namely, the (region-based) VoI criterion and the (contour-based) F-measure (derived from the precision-recall) criterion.
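As a minimal sketch of the contour-based criterion, the F-measure is the harmonic mean of boundary precision and recall. The toy `boundary_pr` below uses exact pixel matching between predicted and ground-truth boundary pixels, whereas real benchmarks tolerate small localisation offsets:

```python
def boundary_pr(pred_edges, gt_edges):
    """Precision/recall of predicted boundary pixels against ground truth,
    with exact pixel matching (illustrative only)."""
    pred, gt = set(pred_edges), set(gt_edges)
    tp = len(pred & gt)                      # correctly detected boundary pixels
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    return precision, recall

def f_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

p, r = boundary_pr([(0, 0), (0, 1)], [(0, 1), (1, 1)])   # p = r = 0.5
f = f_measure(p, r)                                      # -> 0.5
```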
The remainder of the paper is organized as follows. Section 2 presents the basic concepts of multi-objective optimization. Section 3 describes the generation of the segmentation ensemble to be fused by our model, while Section 4 describes the proposed fusion model, i.e., the segmentation criteria used, the multi-objective function, and the optimization strategy of the proposed image-segmentation fusion algorithm. Section 5 presents the experiments and discussion, and Section 6 concludes the paper.
Multi-objective optimization
The motivation for using multi-objective (MO) optimization comes from the drawbacks and limitations of a mono-objective one, as mentioned in our preliminary work [17]. As previously noted, the final segmentation solution is inherently biased by the chosen single criterion, as well as by the parameters of the model and possible outliers in the segmentation ensemble. A MO optimization-based segmentation fusion framework enables us to more efficiently extract the useful information
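This snippet does not detail how the entropy-based confidence weights of EFA-BMFM are computed. Purely as an illustration of how an entropy-weight scheme (in the spirit of the TOPSIS objective-weighting literature cited in the references) can turn criterion scores into weights, one might write:

```python
import math

def entropy_weights(scores):
    """Entropy weight method (illustrative sketch): criteria whose scores
    diverge more across the candidate solutions (lower normalized entropy)
    receive larger weights.  scores[i][j] is the positive value of
    criterion j for candidate solution i."""
    m = len(scores)
    divergences = []
    for col in zip(*scores):                 # iterate over criteria (columns)
        total = sum(col)
        p = [v / total for v in col]
        h = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergences.append(1.0 - h)          # 0 for a constant criterion
    s = sum(divergences)
    return [d / s for d in divergences]

# Criterion 0 is constant over the candidates, criterion 1 varies,
# so all the weight goes to criterion 1:
weights = entropy_weights([[0.2, 0.9], [0.2, 0.1], [0.2, 0.5]])  # ~[0.0, 1.0]
```

A constant criterion carries no information for ranking candidates, which is exactly why its normalized entropy is maximal and its weight vanishes.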
Generation of the initial segmentations
In our application, the initial segmentations (see Fig. 2) used by our fusion framework are simple to acquire. To do this, we employ a K-means [20] clustering technique, with the image expressed in 12 different colour spaces
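A minimal, pure-Python sketch of this ensemble-generation step (a tiny K-means plus one example colour-space conversion; the actual system uses 12 colour spaces and a standard K-means implementation):

```python
import random

def kmeans(pixels, k, iters=10, seed=0):
    """Tiny K-means over a list of 3-tuples; returns one label per pixel."""
    rng = random.Random(seed)
    centres = list(rng.sample(pixels, k))
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Assignment step: nearest centre in squared Euclidean distance.
        for i, p in enumerate(pixels):
            labels[i] = min(range(k), key=lambda c: sum(
                (p[d] - centres[c][d]) ** 2 for d in range(3)))
        # Update step: move each centre to the mean of its members.
        for c in range(k):
            members = [pixels[i] for i, l in enumerate(labels) if l == c]
            if members:
                centres[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return labels

def to_yiq(rgb):
    """RGB -> YIQ, one example of a colour-space conversion."""
    r, g, b = rgb
    return (0.299 * r + 0.587 * g + 0.114 * b,
            0.596 * r - 0.274 * g - 0.322 * b,
            0.211 * r - 0.523 * g + 0.312 * b)

# A two-member ensemble: the same pixels clustered in two colour spaces.
pixels = [(10, 10, 10), (12, 9, 11), (200, 180, 190), (205, 185, 195)]
ensemble = [kmeans(pixels, 2), kmeans([to_yiq(p) for p in pixels], 2)]
```

Because K-means decision boundaries differ from one colour space to another, re-running the same clustering in several spaces is a cheap way to obtain the diverse, weak segmentations the fusion model needs.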
Region-based VoI criterion
The VoI [25] is an information-theoretic criterion for comparing two segmentations (partitions) or clusterings. By measuring the amount of information lost or gained when switching from one clustering to another, this metric quantifies the information shared between two partitions. In particular, the VoI takes the value 0 when two clusterings are identical and is strictly positive otherwise. It also expresses, roughly, the amount of randomness in one segmentation which cannot be
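For two labelings A and B, VoI(A, B) = H(A) + H(B) − 2 I(A, B), where H is the entropy of a partition and I the mutual information between the two. A direct implementation for flattened label maps:

```python
import math
from collections import Counter

def voi(a, b):
    """Variation of information between two labelings (flattened label maps):
    VoI(A, B) = H(A) + H(B) - 2*I(A, B); 0 iff the partitions are identical."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    entropy = lambda counts: -sum(v / n * math.log(v / n)
                                  for v in counts.values())
    mutual_info = sum(v / n * math.log(v * n / (pa[x] * pb[y]))
                      for (x, y), v in pab.items())
    return entropy(pa) + entropy(pb) - 2 * mutual_info

voi([0, 0, 1, 1], [0, 0, 1, 1])   # identical partitions -> 0.0
voi([0, 0, 1, 1], [0, 1, 0, 1])   # independent partitions -> 2*log(2)
```

Note that VoI is invariant to label permutations, since it depends only on the joint distribution of labels, not on their names.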
Data set and benchmarks
In order to measure the performance of the proposed fusion model, we validate our approach on the well-known Berkeley segmentation database (BSDS300) [37]. This dataset has recently been extended to BSDS500 [38] with 200 additional colour test images of size 481 × 321. In order to quantify the efficacy of the proposed segmentation algorithm, for each
Conclusion
In this paper, we present a new and efficient multi-criteria fusion model based on the entropy-weighted formula approach (EFA-BMFM). The proposed model combines multiple segmentation maps to achieve a final, improved segmentation result. It is based on two complementary (contour- and region-based) segmentation criteria, namely the VoI and the F-measure. We applied the proposed segmentation model to BSDS300, BSDS500, ASD and medical images, and the proposed model appears to be
References (70)
- et al., SAR image multiclass segmentation using a multiscale and multidirection triplet Markov fields model in non-subsampled contourlet transform domain, Inf. Fusion (2013)
- et al., Interactive colour image segmentation via iterative evidential labeling, Inf. Fusion (2014)
- A label field fusion model with a variation of information estimator for image segmentation, Inf. Fusion (2014)
- et al., Bayesian image segmentation fusion, Knowl. Based Syst. (2014)
- et al., Multisensor data fusion: a review of the state-of-the-art, Inf. Fusion (2013)
- et al., Multi-focus image fusion with dense SIFT, Inf. Fusion (2015)
- Comparing clusterings - an information based distance, J. Multivar. Anal. (2007)
- et al., A Markov random field image segmentation model for colour textured images, Image Vision Comput. (2006)
- et al., Unsupervised segmentation of natural images via lossy data compression, Comput. Vision Image Understanding (2008)
- et al., Inter-company comparison using modified TOPSIS with objective weights, Comput. Oper. Res. (2000)