Abstract
In Gaussian mixture modeling, it is crucial to select an appropriate number of Gaussians, i.e., the scale of the mixture model, for a sample data set. From the viewpoint of regularization theory, we address this model selection problem by implementing entropy regularized likelihood (ERL) learning on Gaussian mixtures via a batch gradient learning algorithm. Simulation experiments demonstrate that this gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and leads to a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly. We further give an adaptive gradient implementation of ERL learning on Gaussian mixtures, together with a theoretical analysis, and identify a mechanism of generalized competitive learning implicit in ERL learning.
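The batch gradient ERL learning described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: it assumes the ERL objective is the mean log-likelihood minus a weight `lam` times the mean Shannon entropy of the posterior component probabilities, fixes unit component variances in 1-D, reparametrizes the mixing proportions with a softmax, and uses numerical rather than analytic gradients. The data set, `lam`, the step size, and the iteration count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic sample set: two actual Gaussians in 1-D
X = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

K, lam = 4, 0.5  # deliberately over-specified number of components; lam is an assumed regularization weight

def erl(params):
    """ERL objective (assumed form): mean log-likelihood minus lam times the
    mean Shannon entropy of the posterior component probabilities."""
    beta, mu = params[:K], params[K:]
    alpha = np.exp(beta - beta.max())
    alpha /= alpha.sum()                       # softmax mixing proportions
    # component terms alpha_k * N(x; mu_k, 1), shape (N, K)
    f = alpha * np.exp(-0.5 * (X[:, None] - mu) ** 2) / np.sqrt(2 * np.pi)
    p = f.sum(axis=1)                          # mixture density at each sample
    h = f / p[:, None]                         # posterior probabilities
    ent = -(h * np.log(h + 1e-12)).sum(axis=1).mean()
    return np.log(p).mean() - lam * ent

def num_grad(fn, params, eps=1e-5):
    # central finite differences stand in for the analytic gradients of the paper
    g = np.zeros_like(params)
    for i in range(params.size):
        d = np.zeros_like(params)
        d[i] = eps
        g[i] = (fn(params + d) - fn(params - d)) / (2 * eps)
    return g

params = np.concatenate([np.zeros(K), rng.normal(0.0, 2.0, K)])
J0 = erl(params)
for _ in range(1000):                          # batch gradient ascent on the ERL objective
    params += 0.05 * num_grad(erl, params)

alpha = np.exp(params[:K] - params[:K].max())
alpha /= alpha.sum()
# Under the entropy penalty, the proportions of redundant components tend to shrink,
# which is the automatic model selection behavior the abstract describes.
print(np.round(np.sort(alpha), 3))
```

The softmax parametrization keeps the mixing proportions positive and summing to one without constrained optimization, which is why the gradient step can be taken freely on `beta`.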
Cite this article
Lu, Z. Entropy Regularized Likelihood Learning on Gaussian Mixture: Two Gradient Implementations for Automatic Model Selection. Neural Process Lett 25, 17–30 (2007). https://doi.org/10.1007/s11063-006-9028-3