
Minimum Information Loss Cluster Analysis for Categorical Data

  • Conference paper
Machine Learning and Data Mining in Pattern Recognition (MLDM 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4571)


Abstract

The EM algorithm has been used repeatedly to identify latent classes in categorical data by estimating finite distribution mixtures of product components. Unfortunately, the underlying mixtures are not uniquely identifiable and, moreover, the estimated mixture parameters depend on the starting point. For this reason we use the latent class model only to define a set of “elementary” classes, by estimating a mixture with a large number of components. We propose a hierarchical “bottom-up” cluster analysis that unifies the elementary latent classes sequentially. The clustering procedure is controlled by a minimum information loss criterion.
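The approach outlined in the abstract, fitting a mixture of product components by EM and then greedily merging the resulting elementary classes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes binary (multivariate Bernoulli) features, and it measures the cost of a candidate merge as the weighted Kullback-Leibler divergence of the two components from their pooled replacement, a plausible stand-in for the minimum information loss criterion described in the paper.

```python
import numpy as np

def em_bernoulli_mixture(X, n_components, n_iter=50, seed=0):
    # Fit a mixture of product (multivariate Bernoulli) components by EM.
    # X: (n, d) array of 0/1 values.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(n_components, 1.0 / n_components)        # mixture weights
    theta = rng.uniform(0.25, 0.75, (n_components, d))   # per-dimension probs
    for _ in range(n_iter):
        # E-step: component responsibilities, computed in the log domain
        log_p = (X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T
                 + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and component parameters
        nk = r.sum(axis=0)
        w = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return w, theta, r

def merge_pair_loss(w, theta, j, k):
    # Information loss of replacing components j, k by their weighted pool:
    # w_j * KL(theta_j || pool) + w_k * KL(theta_k || pool), summed over dims.
    # (An illustrative criterion, not necessarily the one used in the paper.)
    wj, wk = w[j], w[k]
    pool = (wj * theta[j] + wk * theta[k]) / (wj + wk)
    def kl(p, q):
        return np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))
    return wj * kl(theta[j], pool) + wk * kl(theta[k], pool)
```

A "bottom-up" clustering would then repeatedly evaluate `merge_pair_loss` over all component pairs and merge the pair with the smallest loss, until the desired number of clusters remains; the merge cost is always non-negative, so the greedy sequence orders merges from least to most informative distinctions.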

This research was supported by the grant GACR 102/07/1594 of the Czech Grant Agency and by the projects of the Grant Agency of MŠMT 2C06019 ZIMOLEZ and 1M0572 DAR.






Editor information

Petra Perner


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Grim, J., Hora, J. (2007). Minimum Information Loss Cluster Analysis for Categorical Data. In: Perner, P. (eds) Machine Learning and Data Mining in Pattern Recognition. MLDM 2007. Lecture Notes in Computer Science(), vol 4571. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73499-4_18


  • DOI: https://doi.org/10.1007/978-3-540-73499-4_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-73498-7

  • Online ISBN: 978-3-540-73499-4

  • eBook Packages: Computer Science (R0)
