Abstract
K-means is a clustering method with an interpretable mechanism, but its results are strongly affected by the locations of the initial cluster centers. More importantly, neither K-means nor its improved versions can easily determine the number of cluster centers adaptively. In contrast, ordinary neural networks have powerful information representation ability but lack interpretability, and, to the best of our knowledge, interpretable neural networks have not been used to determine the number of cluster centers for K-means. This paper proposes K-meaNet, which combines the interpretable mechanism of K-means with the powerful information representation ability of neural networks. In the neural network of K-meaNet, the inputs, the weights, and the mathematical expression of each layer all have clear meanings. During training, if a cluster center is critical, the value of the weight associated with it, called its gate, increases, and the center moves toward the ideal cluster center; otherwise, the center's location and its gate value do not change significantly. This endows K-meaNet with the ability to adaptively determine both the location and the number of cluster centers, an ability that K-means and its improved versions lack. Moreover, this adaptive ability is robust to the locations of the initial cluster centers, the number of initial cluster centers, and the number of features. Numerical experiments on six synthetic datasets and three real datasets verify that K-meaNet adaptively determines the number of cluster centers and confirm this robustness.
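The abstract describes the gating idea only at a high level. As a rough illustration (not the paper's actual K-meaNet algorithm), one can sketch a toy analogue in which each candidate center carries a gate, gates of well-supported centers grow while gates of unsupported centers decay, and centers whose gates stay below a threshold are pruned away. All function names, the update rule, and the pruning threshold below are assumptions made for illustration only:

```python
import numpy as np

def gated_kmeans(X, n_init_centers=8, n_iters=50, lr=0.1,
                 gate_threshold=0.5, seed=0):
    """Toy sketch of gate-based center pruning (NOT the paper's K-meaNet).

    Each candidate center has a gate in [0, 1]. Centers that attract an
    above-average share of points see their gates grow and their positions
    pulled toward the cluster mean; the rest see their gates decay and are
    pruned at the end.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_init_centers, replace=False)].astype(float)
    gates = np.full(n_init_centers, 0.5)

    for _ in range(n_iters):
        # Gated assignment: a larger gate makes a center more attractive.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = np.argmin(d / (gates[None, :] + 1e-8), axis=1)

        for k in range(len(centers)):
            members = X[assign == k]
            share = len(members) / len(X)
            # Above-average share -> gate grows; below-average -> it decays.
            gates[k] = np.clip(
                gates[k] + lr * (share * len(centers) - 1.0), 0.0, 1.0)
            if len(members) > 0:
                # Move the center toward the mean of its assigned points.
                centers[k] += lr * (members.mean(axis=0) - centers[k])

    keep = gates > gate_threshold  # prune centers with weak gates
    return centers[keep], gates[keep]
```

In this sketch the surviving centers and their gate values play the role the abstract attributes to K-meaNet's critical centers: their gates have grown during training while the gates of superfluous initial centers have shrunk toward zero.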
Data availability
All data generated or analyzed during this study are included in this published paper.
Funding
This work was supported in part by the National Key R&D Program of China under Grant 2019YFA0708700; in part by the National Natural Science Foundation of China under Grant 62173345; in part by the Fundamental Research Funds for the Central Universities under Grants 20CX05002A and 22CX03002A; and in part by the Joint Education Project for Universities in CEE Countries and China under Grant 2022151.
Author information
Authors and Affiliations
Corresponding authors
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Ethics approval
This paper does not contain any experiments with human or animal participants performed by any of the authors.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Appendices
Appendix 1: Visualization comparisons of the clustering results on the KN2 (synthetic) dataset when the number of the initial cluster centers is 20, 30, 40, and 50
Appendix 2: Non-visualization comparisons of the clustering results on the KN500 (synthetic), KN1000 (synthetic), KN2000 (synthetic), KN4000 (synthetic), Vehicle (real), and Wine (real) datasets
About this article
Cite this article
Xie, X., Pu, YF., Zhang, H. et al. An interpretable neural network for robustly determining the location and number of cluster centers. Int. J. Mach. Learn. & Cyber. 15, 1473–1501 (2024). https://doi.org/10.1007/s13042-023-01978-4