Abstract
This paper addresses a major challenge in clustering: optimal model selection. It presents new, efficient clustering quality indexes relying on feature maximization, an alternative to the usual distributional measures based on entropy or the Chi-square metric, and to vector-based measures such as Euclidean distance or correlation distance. Experiments compare the behavior of these new indexes with that of usual cluster quality indexes based on Euclidean distance, on different kinds of test datasets for which ground truth is available. This comparison clearly highlights the superior accuracy and stability of the new method, its efficiency from the low- to the high-dimensional range, and its tolerance to noise.
Notes
- 1.
Using a p-value to assess the significance of a feature for a cluster, by comparing its contrast to unity, would be a potential alternative to the proposed approach. However, this method would introduce unexpected Gaussian smoothing into the process.
- 2.
By the principle of the method, features selected in this way inevitably have a contrast greater than 1 in some other cluster(s) (see Eq. 3 for details).
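As a rough illustration of the contrast quantity discussed in the notes above (the paper's own definitions, including Eq. 3, are not visible in this preview), the following sketch assumes the feature F-measure from the feature-maximization literature — the harmonic mean of feature recall and feature predominance — and takes contrast as a feature's F-measure in a cluster relative to its mean F-measure across clusters. All function and variable names are illustrative, not the authors' notation.

```python
import numpy as np

def feature_contrast(W, labels):
    """W: (n_samples, n_features) non-negative feature weights.
    labels: cluster id per sample.
    Returns an (n_clusters, n_features) contrast matrix."""
    clusters = np.unique(labels)
    FF = np.zeros((len(clusters), W.shape[1]))
    for i, c in enumerate(clusters):
        Wc = W[labels == c]
        # Feature recall: share of feature f's total weight captured by cluster c
        FR = Wc.sum(axis=0) / np.maximum(W.sum(axis=0), 1e-12)
        # Feature predominance: share of cluster c's total weight carried by f
        FP = Wc.sum(axis=0) / np.maximum(Wc.sum(), 1e-12)
        # Feature F-measure: harmonic mean of recall and predominance
        FF[i] = 2 * FR * FP / np.maximum(FR + FP, 1e-12)
    # Contrast: FF of f in c relative to f's mean FF across clusters,
    # so a value above 1 marks a feature as characteristic of that cluster
    return FF / np.maximum(FF.mean(axis=0), 1e-12)
```

Under this definition, a feature concentrated in one cluster gets a contrast above 1 there and below 1 elsewhere, which is the property note 2 relies on.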
Copyright information
© 2016 Springer International Publishing Switzerland
Cite this paper
Lamirel, J.-C.: Reliable Clustering Indexes. In: Fujita, H., Ali, M., Selamat, A., Sasaki, J., Kurematsu, M. (eds.) Trends in Applied Knowledge-Based Systems and Data Science. IEA/AIE 2016. Lecture Notes in Computer Science, vol. 9799. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-42007-3_10
Print ISBN: 978-3-319-42006-6
Online ISBN: 978-3-319-42007-3