
Entropy Regularized Likelihood Learning on Gaussian Mixture: Two Gradient Implementations for Automatic Model Selection


Abstract

In Gaussian mixture modeling, it is crucial to select the number of Gaussians, that is, the mixture model, for a given sample data set. Under regularization theory, we address this model selection problem by implementing entropy regularized likelihood (ERL) learning on a Gaussian mixture via a batch gradient learning algorithm. Simulation experiments demonstrate that this gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and leads to a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly. We further present an adaptive gradient implementation of ERL learning on a Gaussian mixture, together with a theoretical analysis, and identify a mechanism of generalized competitive learning implicit in ERL learning.
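The paper develops these ideas in full; as a rough illustration of what batch gradient ERL learning might look like, here is a minimal sketch in Python/JAX. It assumes an ERL objective of the form (1/N) Σ_t log Σ_k π_k N(x_t | μ_k, Σ_k) + γ (1/N) Σ_t Σ_k P(k|x_t) log P(k|x_t), with diagonal covariances and plain gradient ascent; the objective form, the regularization factor γ, the learning rate, and all identifiers are illustrative assumptions, not the paper's exact formulation.

    import jax
    import jax.numpy as jnp

    def log_gauss(x, mus, log_vars):
        # log N(x; mu_k, diag(exp(log_var_k))) for every component k at once.
        return -0.5 * jnp.sum(
            (x - mus) ** 2 / jnp.exp(log_vars) + log_vars + jnp.log(2.0 * jnp.pi),
            axis=-1)

    def erl_objective(params, X, gamma):
        # Assumed ERL function: mean log-likelihood plus gamma times the
        # (negative) mean entropy of the component posteriors P(k | x_t).
        logits, mus, log_vars = params
        log_pi = jax.nn.log_softmax(logits)                # mixing proportions
        log_joint = log_pi + jax.vmap(lambda x: log_gauss(x, mus, log_vars))(X)
        log_mix = jax.scipy.special.logsumexp(log_joint, axis=1)
        log_post = log_joint - log_mix[:, None]            # log P(k | x_t)
        neg_entropy = jnp.mean(jnp.sum(jnp.exp(log_post) * log_post, axis=1))
        return jnp.mean(log_mix) + gamma * neg_entropy

    @jax.jit
    def ascent_step(params, X, gamma=0.5, lr=0.05):
        # One batch gradient ascent step on the ERL function over the full set;
        # gamma and lr are illustrative values that would need tuning.
        grads = jax.grad(erl_objective)(params, X, gamma)
        return jax.tree_util.tree_map(lambda p, g: p + lr * g, params, grads)

    # Toy data: two 2-D Gaussians, deliberately over-fitted with K = 5 components.
    key = jax.random.PRNGKey(0)
    k1, k2, k3 = jax.random.split(key, 3)
    X = jnp.concatenate([
        jax.random.normal(k1, (200, 2)) + jnp.array([-3.0, 0.0]),
        jax.random.normal(k2, (200, 2)) + jnp.array([3.0, 0.0]),
    ])

    K = 5
    params = (jnp.zeros(K),                                # mixing logits
              3.0 * jax.random.normal(k3, (K, 2)),         # means
              jnp.zeros((K, 2)))                           # log-variances

    for _ in range(2000):
        params = ascent_step(params, X)

    # Components whose mixing weight collapses are pruned after convergence;
    # the surviving count is the automatically selected number of Gaussians.
    pi = jax.nn.softmax(params[0])
    print("selected components:", int(jnp.sum(pi > 0.01)))
    print("mixing weights:", pi)

Under this reading, the γ-weighted negative posterior entropy rewards crisp posteriors, so components compete for data points and surplus components see their mixing weights driven toward zero; discarding those components after convergence is one plausible account of how the model selection becomes automatic and of the generalized competitive learning mechanism the abstract mentions.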



Author information


Corresponding author

Correspondence to Zhiwu Lu.


About this article

Cite this article

Lu, Z. Entropy Regularized Likelihood Learning on Gaussian Mixture: Two Gradient Implementations for Automatic Model Selection. Neural Process Lett 25, 17–30 (2007). https://doi.org/10.1007/s11063-006-9028-3
