Abstract
The EM algorithm has repeatedly been used to identify latent classes in categorical data by estimating finite mixtures of product components. Unfortunately, such mixtures are not uniquely identifiable and, moreover, the estimated mixture parameters depend on the starting point of the EM iterations. For this reason we use the latent-class model only to define a set of “elementary” classes, obtained by estimating a mixture with a large number of components. We then propose a hierarchical, bottom-up cluster analysis that merges the elementary latent classes sequentially, with the merging controlled by a minimum information-loss criterion.
This research was supported by the grant GACR 102/07/1594 of the Czech Grant Agency and by the projects of the Grant Agency of MŠMT 2C06019 ZIMOLEZ and 1M0572 DAR.
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Grim, J., Hora, J. (2007). Minimum Information Loss Cluster Analysis for Categorical Data. In: Perner, P. (eds) Machine Learning and Data Mining in Pattern Recognition. MLDM 2007. Lecture Notes in Computer Science(), vol 4571. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73499-4_18
Print ISBN: 978-3-540-73498-7
Online ISBN: 978-3-540-73499-4