
Learning Gaussian Mixture Models With Entropy-Based Criteria



Abstract:

In this paper, we address the problem of estimating the parameters of Gaussian mixture models. Although the expectation–maximization (EM) algorithm yields the maximum-likelihood (ML) solution, its sensitivity to the selection of the starting parameters is well known, and it may converge to the boundary of the parameter space. Furthermore, the resulting mixture depends on the number of selected components, but the optimal number of kernels may be unknown beforehand. We introduce the use of the entropy of the probability density function (pdf) associated with each kernel to measure the quality of a given mixture model with a fixed number of kernels. We propose two methods to approximate the entropy of each kernel and a modification of the classical EM algorithm in order to find the optimum number of components of the mixture. Moreover, we use two stopping criteria: a novel global mixture entropy-based criterion called Gaussianity deficiency (GD) and one based on the minimum description length (MDL) principle. Our algorithm, called entropy-based EM (EBEM), starts with a unique kernel and performs only splitting, selecting the worst kernel according to the GD. We have successfully tested it in probability density estimation, pattern classification, and color image segmentation. Experimental results improve on those of other state-of-the-art model order selection methods.
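The intuition behind the GD-driven split can be illustrated with a minimal 1-D sketch. The paper defines its own Gaussianity-deficiency measure; the ratio form below is a hypothetical illustrative reading, assuming a histogram plug-in entropy estimator: since a Gaussian maximizes differential entropy for a given variance, the normalized gap between that maximum and the empirical entropy of the samples assigned to a kernel is near zero when the kernel fits a truly Gaussian cluster and grows when it covers non-Gaussian (e.g., bimodal) data, flagging it as the one to split.

```python
import math
import random

def gaussian_entropy(var):
    # Differential entropy of a 1-D Gaussian: 0.5 * log(2*pi*e*var).
    # This is the maximum entropy attainable for a given variance.
    return 0.5 * math.log(2 * math.pi * math.e * var)

def histogram_entropy(xs, bins=50):
    # Plug-in estimate of differential entropy from a histogram:
    # -sum(p_i * log p_i) + log(bin width).
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in xs:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(xs)
    h = 0.0
    for c in counts:
        if c:
            p = c / n
            h -= p * math.log(p)
    return h + math.log(width)

def gaussianity_deficiency(xs):
    # Hypothetical ratio form (not the paper's exact definition):
    # the empirical entropy's shortfall from the Gaussian maximum,
    # normalized by that maximum. Near 0 for Gaussian samples,
    # positive for non-Gaussian ones.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    h_max = gaussian_entropy(var)
    h_emp = histogram_entropy(xs)
    return (h_max - h_emp) / h_max
```

A split-style scheme would evaluate this deficiency for each kernel's assigned samples and split the kernel with the largest value: a single Gaussian sample scores near zero, while a well-separated two-mode sample scores clearly higher.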
Published in: IEEE Transactions on Neural Networks ( Volume: 20, Issue: 11, November 2009)
Page(s): 1756 - 1771
Date of Publication: 18 September 2009

PubMed ID: 19770090

