Abstract
AutoML has been applied effectively in supervised learning, mainly in classification tasks, where the goal is to find the best machine-learning pipeline given an available ground truth. This is not the case for unsupervised tasks, which are exploratory by nature and are performed to unveil hidden insights. Since there is no single right result, analyzing different configurations is more important than returning the best-performing one. In exploratory unsupervised tasks such as cluster analysis, different facets of a dataset can be interesting to the data scientist; for instance, data items can be effectively grouped together in different subspaces of features. In this paper, we present AutoClues, which explores clustering pipelines and returns a dashboard of relevant and diverse clusterings via AutoML and diversification. AutoML ensures that the explored pipelines (including pre-processing steps) compute good clusterings, and diversification selects, out of the explored clusterings, those conveying different clues to the data scientist.
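To make the two phases concrete, here is a minimal sketch (not the authors' implementation): a handful of hypothetical clustering pipelines are scored with an internal metric, and a greedy max-min step then picks mutually dissimilar results. The candidate pipelines, the silhouette criterion, and the AMI-based distance are illustrative assumptions.

```python
# Minimal sketch, NOT the AutoClues implementation: score a few candidate
# clustering pipelines, then greedily select a diverse subset of the results.
from sklearn.datasets import make_blobs
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score, adjusted_mutual_info_score

X, _ = make_blobs(n_samples=300, centers=4, n_features=6, random_state=0)

# Hypothetical candidate pipelines (pre-processing steps + clustering algorithm).
candidates = {
    "kmeans_k3": Pipeline([("scale", StandardScaler()),
                           ("cl", KMeans(n_clusters=3, n_init=10, random_state=0))]),
    "kmeans_k4_pca": Pipeline([("scale", StandardScaler()), ("pca", PCA(n_components=2)),
                               ("cl", KMeans(n_clusters=4, n_init=10, random_state=0))]),
    "agglo_k4": Pipeline([("scale", StandardScaler()),
                          ("cl", AgglomerativeClustering(n_clusters=4))]),
}

# Exploration phase (here just exhaustive): keep each clustering and its relevance.
results = {}
for name, pipe in candidates.items():
    labels = pipe.fit_predict(X)
    results[name] = (labels, silhouette_score(X, labels))  # relevance on original data

# Diversification phase: greedy max-min selection, distance = 1 - AMI between clusterings.
def dist(a, b):
    return 1.0 - adjusted_mutual_info_score(results[a][0], results[b][0])

selected = [max(results, key=lambda n: results[n][1])]  # start from the most relevant
while len(selected) < 2:
    rest = [n for n in results if n not in selected]
    selected.append(max(rest, key=lambda n: min(dist(n, s) for s in selected)))

print(selected)  # a small dashboard of relevant yet mutually dissimilar clusterings
```

In the actual framework the candidate space is explored by AutoML rather than enumerated exhaustively, but the relevance-then-diversification flow is the one described in the abstract.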
Notes
1. This dimensionality reduction visualizes high-dimensional clusterings in 2D while preserving distance proportions. We apply it with the default Scikit-learn hyperparameters (see the sketch after these notes).
2. If an algorithm has no hyperparameters (\(\varLambda_{A} = \varnothing\)), we set a placeholder \(\varLambda_{A} = \{ 1 \}\).
3.
4. In statistics, it serves as a baseline for assessing the significance of random variations.
5. We use the default hyperparameter \(\beta = 0.5\) and set \(\alpha\) according to the test at hand.
6. Metrics are computed on the original dataset (i.e., no t-SNE distortion).
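As a concrete reading of note 1, the snippet below projects a clustered dataset to 2D with t-SNE using Scikit-learn's defaults and colors the points by cluster; the dataset and the k-means labeling are placeholders, not part of the paper.

```python
# Sketch for note 1 (assumed usage): project data to 2D with t-SNE (default
# Scikit-learn hyperparameters) and color points by their cluster labels.
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

X, _ = make_blobs(n_samples=300, centers=3, n_features=10, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

X_2d = TSNE().fit_transform(X)  # defaults; used only for visualization
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels, s=10)
plt.show()
```

Per note 6, any quality metric would still be computed on the original high-dimensional data, not on this 2D projection.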
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Francia, M., Giovanelli, J., Golfarelli, M. (2024). AutoClues: Exploring Clustering Pipelines via AutoML and Diversification. In: Yang, D.N., Xie, X., Tseng, V.S., Pei, J., Huang, J.W., Lin, J.C.W. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science, vol. 14645. Springer, Singapore. https://doi.org/10.1007/978-981-97-2242-6_20
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2241-9
Online ISBN: 978-981-97-2242-6