
Incremental Learning of Multivariate Gaussian Mixture Models

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6404)

Abstract

This paper presents a new algorithm for unsupervised incremental learning based on a Bayesian framework. The algorithm, called IGMM (Incremental Gaussian Mixture Model), creates and continually adjusts a Gaussian mixture model consistent with all sequentially presented data. IGMM is particularly useful for on-line incremental clustering of data streams, as encountered in the domains of mobile robotics and animats. It builds an incremental knowledge model of the domain consisting of primitive concepts involving all observed variables. We present preliminary results obtained on synthetic data, consider practical issues such as convergence properties, and discuss future developments.
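The core idea described above can be illustrated with a short sketch: each arriving sample either updates existing Gaussian components with decaying online updates, or, when no component explains it well enough, spawns a new component. This is a hypothetical simplification for illustration, not the paper's exact formulation; the novelty threshold `tau`, the diagonal covariances, and the initial variance `sigma_init` are assumptions.

```python
import numpy as np

class IncrementalGMM:
    """Minimal sketch of an incremental Gaussian mixture learner."""

    def __init__(self, dim, tau=0.01, sigma_init=1.0):
        self.dim = dim
        self.tau = tau              # novelty threshold on component density
        self.sigma_init = sigma_init
        self.means = []             # component means
        self.vars = []              # diagonal covariances
        self.counts = []            # accumulated posterior mass per component

    def _pdf(self, x, mu, var):
        # Density of a diagonal-covariance Gaussian at x
        diff2 = (x - mu) ** 2 / var
        norm = np.prod(2 * np.pi * var) ** -0.5
        return norm * np.exp(-0.5 * diff2.sum())

    def update(self, x):
        x = np.asarray(x, dtype=float)
        dens = np.array([self._pdf(x, m, v)
                         for m, v in zip(self.means, self.vars)])
        if dens.size == 0 or dens.max() < self.tau:
            # Novelty: no component explains x, so start a new one at x
            self.means.append(x.copy())
            self.vars.append(np.full(self.dim, self.sigma_init))
            self.counts.append(1.0)
            return
        # Responsibility-weighted online update of all components
        post = dens * np.array(self.counts)
        post /= post.sum()
        for j, w in enumerate(post):
            self.counts[j] += w
            lr = w / self.counts[j]  # decaying, per-component learning rate
            delta = x - self.means[j]
            self.means[j] = self.means[j] + lr * delta
            self.vars[j] = self.vars[j] + lr * (delta ** 2 - self.vars[j])
```

Fed a stream of samples one at a time, the learner grows its own component set, which matches the paper's goal of building the model incrementally rather than fixing the number of mixture components in advance.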





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Engel, P.M., Heinen, M.R. (2010). Incremental Learning of Multivariate Gaussian Mixture Models. In: da Rocha Costa, A.C., Vicari, R.M., Tonidandel, F. (eds) Advances in Artificial Intelligence – SBIA 2010. Lecture Notes in Computer Science, vol 6404. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16138-4_9


  • DOI: https://doi.org/10.1007/978-3-642-16138-4_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16137-7

  • Online ISBN: 978-3-642-16138-4

  • eBook Packages: Computer Science (R0)
