Information Fusion

Volume 38, November 2017, Pages 104-121

Full Length Article
EFA-BMFM: A multi-criteria framework for the fusion of colour image segmentation

https://doi.org/10.1016/j.inffus.2017.03.001

Highlights

  • Fusion of image segmentations using a single criterion can limit performance.

  • A multi-criteria fusion model based on the entropy-weighted formula is proposed.

  • This new approach aims to overcome the bias caused by using a single criterion.

  • The proposed model combines two conflicting and complementary criteria.

  • An extended local optimization procedure based on superpixels is introduced.

Abstract

Considering the recent progress of practical applications in the field of image processing, it is increasingly important to develop new, efficient and more reliable algorithms to solve the image segmentation problem. To this end, various fusion-based segmentation approaches which use consensus clustering, and which are based on the optimization of a single criterion, have been proposed. One of the greatest challenges with these approaches is to select the fusion criterion that gives the best performance for the image segmentation model. In this paper, we propose a new image segmentation fusion model based on multi-objective optimization, which aims to overcome the limitation and bias caused by a single criterion and to provide a final, improved segmentation. To address the ill-posed search for a single best criterion, the proposed fusion model combines two conflicting and complementary criteria for segmentation fusion, namely, the region-based variation of information (VoI) criterion and the contour-based F-measure (precision-recall) criterion, combined through an entropy-based confidence weighting factor. To optimize our energy-based model, we propose an extended local optimization procedure based on superpixels and derived from the iterated conditional modes (ICM) algorithm. This new multi-objective median partition-based approach, which relies on the fusion of quick, inaccurate spatial clustering results, emerges as an appealing alternative to the traditional segmentation fusion models existing in the literature. We perform experiments on the Berkeley database with manual ground-truth segmentations, and the results clearly show the feasibility and efficiency of the proposed methodology.

Introduction

The goal of image segmentation is to divide an image into separate regions with uniform and homogeneous attributes [1]. This step is crucial for higher-level tasks such as feature extraction, pattern recognition, and target detection [2]. Several promising methods for the segmentation of textured natural images have recently been reported in the literature. Among them, those based on combining multiple weak segmentations of the same image to improve the quality of the final result are appealing from a theoretical perspective and offer an effective compromise between the complexity of the segmentation model and its efficiency.

Most of these approaches, which are used to compute the segmentation fusion result from a set of initial and weak putative segmentation maps, are theoretically based on the notion of median partition. According to a given specific criterion (which can also be expressed as a distance or a similarity index/measure between two segmentation maps), the median partition approach aims to minimize the average of the distances (or to maximize the average of similarity measures), separating the (consensus) solution from the other segmentations to be fused. To date, a large and growing number of fusion-segmentation approaches based on the result of the median partition problem, along with different criteria or different optimization strategies, have been proposed in the literature.
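Schematically, given a segmentation ensemble {S_1, …, S_L} and a distance d(·,·) induced by the chosen fusion criterion, the median partition problem can be written in generic notation (not the symbols used later in this paper) as

    \hat{S} = \arg\min_{S \in \mathcal{S}} \; \frac{1}{L} \sum_{l=1}^{L} d(S, S_l),

where \mathcal{S} denotes the (combinatorially large) space of possible segmentations of the image; replacing the distance d with a similarity measure simply turns the minimization into a maximization.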

For example, a fusion model of weak segmentations was initially introduced, in the evidence accumulation sense, in [3] with a co-association matrix, and then in [4] based on the minimization of an inertia (or intra-cluster scatter) criterion across cluster instances (represented by the set of local re-quantized label histograms given by each input segmentation to be fused). The fusion of multiple segmentation maps has also been proposed with respect to the Rand Index (RI) criterion (or its probabilistic version), with either a stochastic constrained random walking technique [5] (combined with a mutual information-based estimator to assess the optimal number of regions), an algebraic optimization method [6], a Bayesian Markov random field model [7], a superpixel-based approach optimized by the expectation-maximization procedure [8] or, finally, according to a similarity distance function built from the adjusted RI [9] and optimized with a stochastic gradient descent. It should also be noted that the solution of the median partition problem can be determined according to an entropy criterion, either in the variation of information (VoI) sense [10], using a linear-complexity, energy-based model optimized by an iterative steepest local energy descent strategy combined with a spatial connectivity constraint, or in the mutual information sense [11] using expectation-maximization (EM) optimization. The fusion of clustering results can also be carried out according to the global consistency error (GCE) criterion [12] (a perceptual measure which takes into account the inherent multiscale nature of an image segmentation by measuring the level of refinement existing between two spatial partitions) or based on the precision-recall criterion [13] using a hierarchical relaxation scheme. In this context, Franek et al. [14] proposed a methodology allowing the use of virtually any ensemble clustering method to address the problem of image-segmentation combination. Their strategy is mainly based on a pre-processing step which estimates a superpixel map from the segmentation ensemble in order to reduce the dimensionality of the combinatorial problem. Finally, in remote sensing, combination models have been reported based on the maximum-margin sense (of the hyperplanes between spatial clusters) [15], as well as the recent Bayesian fusion procedure proposed in [16], in which the class labels obtained from different segmentation maps (given by different sensors) are fused using the weights-of-evidence model.

In fact, the performance of these energy-based fusion models depends both on the optimization procedure, with its potential ability to find an optimal solution (as quickly as possible), and, to a large extent, on the chosen fusion criterion, which defines all the intrinsic properties of the consensus segmentation map to be estimated. However, even assuming that an efficient optimization procedure is designed and implemented (in terms of its ability to quickly find a globally optimal and stable solution), it remains unclear whether a single, most appropriate criterion can be found that both extracts all the useful information contained in the segmentation ensemble and models all the complex geometric properties of the final consensus segmentation map. Another way to look at this problem is to understand that if the fusion procedure is based on the optimization of a single criterion, it is inherently biased towards one particular family of possible solutions; in other words, only some specific regions of the search space contain solutions that are defined a priori (by the criterion) as acceptable. This may bias and limit the performance of an image segmentation model. To overcome this main disadvantage (the bias caused by a single criterion), we propose to use approaches based on multi-objective optimization in order to design a new fusion-segmentation model which takes advantage of the (potential) complementarity of different objectives (criteria) and finally yields a better consensus segmentation result. Following this strategy, in this work, we introduce a new multi-criteria fusion model weighted by an entropy-based confidence measure (EFA-BMFM). The main goal of this model is to simultaneously combine and optimize two different and complementary segmentation-fusion criteria, namely, the (region-based) VoI criterion and the (contour-based) F-measure criterion (derived from precision-recall).
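To make the weighting idea concrete, the sketch below applies one standard entropy-weighting scheme from the multi-criteria decision-making literature (in the spirit of the entropy-based objective weights used with fuzzy TOPSIS) to two cost-like criteria, e.g., a VoI term and a (1 − F-measure) term evaluated over a set of candidate solutions. The function names and the exact weighting formula are illustrative assumptions; the confidence-weighting factor actually used by EFA-BMFM is the one defined in Section 4.

    import numpy as np

    def entropy_weights(criteria):
        """One common entropy-weighting scheme from multi-criteria decision
        making (illustrative assumption; the paper's exact formula is given
        in Section 4). criteria: (n_candidates, n_criteria) array of
        non-negative cost values."""
        x = np.asarray(criteria, dtype=float)
        n = x.shape[0]
        # Turn each criterion column into a discrete distribution over candidates.
        p = x / (x.sum(axis=0, keepdims=True) + 1e-12)
        # Normalized Shannon entropy of each criterion across the candidates.
        e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)
        # Less uniform (more discriminative) criteria receive larger weights.
        d = 1.0 - e
        return d / (d.sum() + 1e-12)

    def combined_cost(voi_costs, f_costs):
        """Entropy-weighted sum of two cost-like criteria (lower is better);
        hypothetical helper, for illustration only."""
        costs = np.column_stack([voi_costs, f_costs])
        w = entropy_weights(costs)
        return costs @ w

A candidate consensus segmentation would then simply be preferred when its combined cost is lower.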

The remainder of the paper is organized as follows. In Section 2, we present the basic concepts of multi-objective optimization. In Section 3, we describe the generation of the segmentation ensemble to be fused by our model, while in Section 4, we describe the proposed fusion model, i.e., the segmentation criteria used, the multi-objective function and the optimization strategy of the proposed algorithm for the fusion of image segmentations. We present experiments and discussions in Section 5, and in Section 6, we conclude the paper.

Section snippets

Multi-objective optimization

The motivation for using multi-objective (MO) optimization stems from the drawbacks and limitations of a mono-objective formulation, as mentioned in our preliminary work [17]. As previously mentioned, the final segmentation solution is inherently biased by the chosen single criterion, as well as by the parameters of the model and the possible outliers of the segmentation ensemble. A MO optimization-based segmentation fusion framework enables us to more efficiently extract the useful information
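For reference, a generic MO problem and its weighted-sum scalarization read, in standard (non-paper-specific) notation,

    \min_{x \in \Omega} \big(f_1(x), \dots, f_k(x)\big)
    \quad\text{and}\quad
    \min_{x \in \Omega} \sum_{j=1}^{k} w_j f_j(x), \qquad w_j \ge 0,\ \textstyle\sum_j w_j = 1.

A solution x* is Pareto-optimal if no x ∈ Ω satisfies f_j(x) ≤ f_j(x*) for every j, with strict inequality for at least one j; a weighted-sum scalarization with positive weights, such as the entropy-weighted combination of two criteria used in this work, selects a single compromise solution of this kind.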

Generation of the initial segmentations

In our application, it is simple to acquire the initial segmentations (see Fig. 2) used by our fusion framework. To do this, we employ a K-means [20] clustering technique, with the image expressed in 12 different colour spaces
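A minimal sketch of this ensemble-generation step is given below, assuming per-pixel K-means on colour values, scikit-image colour-space conversions and scikit-learn's KMeans; only a few common colour spaces are listed (the paper uses 12 in total, and its exact descriptors and parameter settings may differ).

    import numpy as np
    from skimage import color
    from sklearn.cluster import KMeans

    # A handful of common colour spaces (the paper uses 12; the exact set may differ);
    # the conversion functions below are those provided by scikit-image.
    COLOUR_SPACES = {
        "RGB": lambda img: img,
        "Lab": color.rgb2lab,
        "HSV": color.rgb2hsv,
        "YUV": color.rgb2yuv,
        "YCbCr": color.rgb2ycbcr,
        "LUV": color.rgb2luv,
    }

    def initial_segmentations(rgb_image, n_clusters=6, seed=0):
        """Simplified sketch of the segmentation-ensemble generation:
        per-pixel K-means clustering of colour values in several colour
        spaces. rgb_image: float RGB image with values in [0, 1]."""
        h, w, _ = rgb_image.shape
        ensemble = {}
        for name, convert in COLOUR_SPACES.items():
            features = convert(rgb_image).reshape(-1, 3).astype(float)
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=seed).fit_predict(features)
            ensemble[name] = labels.reshape(h, w)
        return ensemble

Each label map in the returned ensemble then plays the role of one weak input segmentation to be fused.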

Region-based VoI criterion

The VoI [25] is an information-theoretic criterion used for comparing two segmentations (partitions) or clusterings. By measuring the amount of information which is lost or gained when switching from one clustering to another, this metric quantifies the information that is not shared between two partitions. In particular, the VoI takes a value of 0 when two clusterings are identical, and is strictly positive otherwise. Similarly, it also roughly expresses the amount of randomness in one segmentation which cannot be
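In standard information-theoretic notation (not the symbols of this paper), for two segmentations S_a and S_b with label entropies H(·) and mutual information I(·;·),

    \mathrm{VoI}(S_a, S_b) = H(S_a) + H(S_b) - 2\, I(S_a; S_b)
                           = H(S_a \mid S_b) + H(S_b \mid S_a),

so the VoI is zero exactly when the two partitions coincide and grows as they share less information. (Recent versions of scikit-image expose skimage.metrics.variation_of_information, which returns the two conditional-entropy terms of this sum.)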

Data set and benchmarks

In order to measure the performance of the proposed fusion model, we validate our approach using the well-known Berkeley segmentation database (BSDS300) [37]. Recently, this dataset has been extended to BSDS500 [38] with 200 additional test colour images of size 481 × 321. In order to quantify the efficacy of the proposed segmentation algorithm, for each

Conclusion

In this paper, we present a new and efficient multi-criteria fusion model based on the entropy-weighted formula approach (EFA-BMFM). The proposed model combines multiple segmentation maps to achieve a final improved segmentation result. This model is based on two complementary (contour and region-based) criteria of segmentation (the VoI and the F-measure criteria). We applied the proposed segmentation model to BSDS300, BSDS500, ASD and medical images, and the proposed model appears to be

References (70)

  • T.-C. Wang et al., Developing a fuzzy TOPSIS approach based on subjective weights and objective weights, Expert Syst. Appl. (2009)
  • J.J. Lewis et al., Pixel- and region-based image fusion with complex wavelets, Inf. Fusion (2007)
  • S. Niu et al., Robust noise region-based active contour model via local similarity factor for image segmentation, Pattern Recognit. (2017)
  • H. Ali et al., A variational model with hybrid images data fitting energies for segmentation of images with intensity inhomogeneity, Pattern Recognit. (2016)
  • M. Mignotte, MDS-based segmentation model for the fusion of contour and texture cues in natural images, Comput. Vision Image Understanding (2012)
  • M. Mignotte, A de-texturing and spatially constrained k-means approach for image segmentation, Pattern Recognit. Lett. (2011)
  • L. Dong et al., LSI: latent semantic inference for natural image segmentation, Pattern Recognit. (2016)
  • X. Liu et al., A spectral histogram model for texton modeling and texture discrimination, Vision Res. (2002)
  • U.C. Benz et al., Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information, J. Photogramm. Remote Sens. (2004)
  • A.L.N. Fred et al., Data clustering using evidence accumulation, Proceedings of the 16th International Conference on Pattern Recognition (2002)
  • M. Mignotte, Segmentation by fusion of histogram-based k-means clusters in different colour spaces, IEEE Trans. Image Process. (2008)
  • P. Wattuya et al., A random walker based approach to combining multiple segmentations, Proceedings of the 19th International Conference on Pattern Recognition (2008)
  • S. Ghosh et al., A general framework for reconciling multiple weak segmentations of an image, Proceedings of the Workshop on Applications of Computer Vision (2009)
  • M. Mignotte, A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation, IEEE Trans. Image Process. (2010)
  • A. Alush et al., Ensemble segmentation using efficient integer linear programming, IEEE Trans. Pattern Anal. Mach. Intell. (2012)
  • M. Ozay et al., Fusion of image segmentation algorithms using consensus clustering, Proceedings of the 20th IEEE International Conference on Image Processing (2013)
  • L. Khelifi et al., GCE-based model for the fusion of multiples colour image segmentations, Proceedings of the 23rd IEEE International Conference on Image Processing (2016)
  • C. Helou et al., A precision-recall criterion based consensus model for fusing multiple segmentations, Int. J. Signal Process. Image Process. Pattern Recognit. (2014)
  • L. Franek et al., Image segmentation fusion using general ensemble clustering methods
  • X. Ceamanos et al., A classifier ensemble based on fusion of support vector machines for classifying hyperspectral data, Int. J. Image Data Fusion (2010)
  • B. Song et al., A novel decision fusion method based on weights of evidence model, Int. J. Image Data Fusion (2014)
  • L. Khelifi et al., A new multi-criteria fusion model for colour textured image segmentation, Proceedings of the 23rd IEEE International Conference on Image Processing (2016)
  • L. Khelifi et al., A hybrid approach based on multi-objective simulated annealing and tabu search to solve the dynamic dial-a-ride problem, Proceedings of the International Conference on Advanced Logistics and Transport (2013)
  • B.C. Wei et al., Multi-objective nature-inspired clustering techniques for image segmentation, Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems (CIS) (2010)
  • S. Lloyd, Least squares quantization in PCM, IEEE Trans. Inf. Theory (1982)