Abstract
This paper presents a new clustering technique, the STep-wise Automatic Rival-penalized (STAR) k-means algorithm (denoted k*-means), a generalized version of the conventional k-means (MacQueen 1967). Not only does this new algorithm apply to ellipse-shaped data clusters rather than only to ball-shaped ones as k-means does, but it can also perform appropriate clustering without knowing the cluster number, by gradually penalizing the winning chance of the extra seed points during learning competition. Although the existing RPCL (Xu et al. 1993) can also select the cluster number automatically, by driving extra seed points far away from the input data set, its performance is highly sensitive to the choice of the de-learning rate, and to the best of our knowledge no theoretical result yet guides this choice. In contrast, the proposed k*-means algorithm does not require such a rate. We qualitatively analyze its rival-penalization mechanism, and the analysis is well supported by the experiments.
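To make the idea concrete, the following is a minimal, illustrative sketch of the general mechanism the abstract describes: competitive winner selection with a Mahalanobis (ellipse-aware) distance plus a penalty on seed points that rarely win, so that extra seed points gradually lose the competition and can be discarded. The function name, the proportion variable `alpha`, the learning rate `eta`, and the specific update rules are assumptions chosen for illustration; they are not the exact k*-means equations from the paper.

```python
# Illustrative sketch only: penalized online competitive clustering.
# The -log(alpha_j) term handicaps seeds that seldom win, so genuine
# cluster centers keep winning while extra seeds are squeezed out.
import numpy as np

def penalized_online_clustering(X, k_max, n_epochs=20, eta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Initialize seed points from the data, identity covariances, equal weights.
    centers = X[rng.choice(len(X), k_max, replace=False)].copy()
    covs = np.array([np.eye(d) for _ in range(k_max)])
    alpha = np.full(k_max, 1.0 / k_max)   # winning proportions
    wins = np.ones(k_max)                 # smoothed win counts

    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            # Penalized competition: Mahalanobis distance minus log winning
            # proportion; seeds that seldom win get an ever larger penalty.
            cost = np.empty(k_max)
            for j in range(k_max):
                diff = x - centers[j]
                inv = np.linalg.inv(covs[j])
                cost[j] = (0.5 * diff @ inv @ diff
                           + 0.5 * np.log(np.linalg.det(covs[j]))
                           - np.log(alpha[j]))
            w = int(np.argmin(cost))

            # Move only the winner toward the input (no explicit de-learning rate).
            diff = x - centers[w]
            centers[w] += eta * diff
            covs[w] += eta * (np.outer(diff, diff) + 1e-6 * np.eye(d) - covs[w])

            # Update winning proportions; losers' alpha decays toward zero,
            # which is how extra seed points are gradually penalized.
            wins[w] += 1
            alpha = wins / wins.sum()

    # Seeds whose winning proportion collapsed are treated as extra and dropped.
    keep = alpha > 1.0 / (4 * k_max)
    return centers[keep], covs[keep], alpha[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two elongated (ellipse-shaped) Gaussian clusters, started with 4 seeds.
    a = rng.multivariate_normal([0, 0], [[3.0, 0.0], [0.0, 0.3]], 300)
    b = rng.multivariate_normal([8, 4], [[0.4, 0.0], [0.0, 2.5]], 300)
    centers, covs, alpha = penalized_online_clustering(np.vstack([a, b]), k_max=4)
    print("surviving centers:\n", centers)
    print("winning proportions:", np.round(alpha, 3))
```

The full covariance in the distance is what allows elongated (ellipse-shaped) clusters, and the proportion penalty replaces RPCL's explicit de-learning of the rival, which is the sensitivity the abstract argues against.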
References
S.C. Ahalt, A.K. Krishnamurthy, P. Chen, and D.E. Melton, “Competitive Learning Algorithms for Vector Quantization”, Neural Networks, Vol. 3, pp. 277–291, 1990.
H. Akaike, “Information Theory and An Extension of the Maximum Likelihood Principle”, Proceedings of Second International Symposium on Information Theory, pp. 267–281, 1973.
H. Akaike, “A New Look at the Statistical Model Identification”, IEEE Transactions on Automatic Control AC-19, pp. 716–723, 1974.
H. Bozdogan, “Model Selection and Akaike’s Information Criterion: The General Theory and its Analytical Extensions”, Psychometrika, Vol. 52, No. 3, pp. 345–370, 1987.
J.B. MacQueen, “Some Methods for Classification and Analysis of Multivariate Observations”, Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability, 1, Berkeley, Calif.: University of California Press, pp. 281–297, 1967.
G. Schwarz, “Estimating the Dimension of a Model”, The Annals of Statistics, Vol. 6, No. 2, pp. 461–464, 1978.
L. Xu, “How Many Clusters?: A Ying-Yang Machine Based Theory for A Classical Open Problem in Pattern Recognition”, Proceedings of IEEE International Conference on Neural Networks, Vol. 3, pp. 1546–1551, 1996.
L. Xu, “Bayesian Ying-Yang Machine, Clustering and Number of Clusters”, Pattern Recognition Letters, Vol. 18, No. 11-13, pp. 1167–1178, 1997.
L. Xu, A. Krzyżak and E. Oja, “Rival Penalized Competitive Learning for Clustering Analysis, RBF Net, and Curve Detection”, IEEE Transactions on Neural Networks, Vol. 4, pp. 636–648, 1993. A preliminary version appeared in Proceedings of 1992 International Joint Conference on Neural Networks, Vol. 2, pp. 665–670, 1992.
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Cheung, Y.M. (2002). k*-Means — A Generalized k-Means Clustering Algorithm with Unknown Cluster Number. In: Yin, H., Allinson, N., Freeman, R., Keane, J., Hubbard, S. (eds) Intelligent Data Engineering and Automated Learning — IDEAL 2002. IDEAL 2002. Lecture Notes in Computer Science, vol 2412. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45675-9_48
DOI: https://doi.org/10.1007/3-540-45675-9_48
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-44025-3
Online ISBN: 978-3-540-45675-9