Abstract
Many classifiers have been introduced to learn from data. Each has strengths and weaknesses that make it suitable for certain problems, but there is no reliable way to indicate which classifier is the best (or even a good one) for a particular problem. Fortunately, ensemble learning offers a powerful approach to building a near-optimal classification system for a given problem. The most challenging issue in classifier ensembles is how to create a suitable ensemble of base classifiers. An ensemble vitally needs diversity: for a pool of classifiers to succeed as an ensemble, its members must be diverse enough to cover each other's errors. Therefore, during ensemble creation we need a mechanism that guarantees the diversity of the ensemble classifiers. One such mechanism is to select or remove a subset of the produced base classifiers so as to maintain diversity within the ensemble. This paper proposes an innovative ensemble-creation method named Classifier Selection Based on Clustering (CSBC). CSBC guarantees the necessary diversity among ensemble members by clustering the classifiers themselves. It uses bagging to generate the base classifiers. After producing a large number of base classifiers, CSBC partitions them with a clustering algorithm and then selects one classifier from each cluster to form the final ensemble. Weighted majority voting is used as the ensemble's aggregation function. We examine how the number of clusters affects the performance of CSBC and how a good approximate value for the cluster number can be chosen adaptively for any dataset. Experiments on a large number of real datasets from the UCI repository support our conclusions.
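The pipeline described above can be sketched in a few steps. This is a minimal illustrative sketch, not the authors' implementation: the choice of decision trees as base learners, k-means over prediction vectors as the clustering step, and validation-accuracy weights are all assumptions made for the example.

```python
# Sketch of the CSBC idea: bagging -> cluster classifiers by their
# prediction behaviour -> keep one representative per cluster ->
# weighted majority vote. All concrete choices here (decision trees,
# k-means, accuracy-based weights) are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# 1) Bagging: train many base classifiers on bootstrap samples.
n_base, k_clusters = 30, 5
pool = []
for _ in range(n_base):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx]))

# 2) Cluster the classifiers by their predictions on a validation set,
#    so classifiers that behave alike fall into the same cluster.
preds = np.array([clf.predict(X_val) for clf in pool])
labels = KMeans(n_clusters=k_clusters, n_init=10, random_state=0).fit_predict(preds)

# 3) From each cluster keep the most accurate member, weighted by its
#    validation accuracy (one plausible weighting scheme).
ensemble, weights = [], []
for c in range(k_clusters):
    members = [i for i in range(n_base) if labels[i] == c]
    accs = [(preds[i] == y_val).mean() for i in members]
    best = members[int(np.argmax(accs))]
    ensemble.append(pool[best])
    weights.append(max(accs))

# 4) Weighted majority vote over the selected classifiers.
def predict(Xq):
    votes = np.zeros((len(Xq), len(np.unique(y))))
    for w, clf in zip(weights, ensemble):
        votes[np.arange(len(Xq)), clf.predict(Xq)] += w
    return votes.argmax(axis=1)

print("ensemble size:", len(ensemble))
print("ensemble validation accuracy:", (predict(X_val) == y_val).mean())
```

Clustering over prediction vectors (rather than model parameters) is what lets the selection step enforce diversity: two classifiers that make the same mistakes land in the same cluster, and only one of them survives.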
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Jamnejad, M.I., Parvin, S., Heidarzadegan, A., Moshki, M. (2014). A Meta Classifier by Clustering of Classifiers. In: Gelbukh, A., Espinoza, F.C., Galicia-Haro, S.N. (eds) Nature-Inspired Computation and Machine Learning. MICAI 2014. Lecture Notes in Computer Science(), vol 8857. Springer, Cham. https://doi.org/10.1007/978-3-319-13650-9_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-13649-3
Online ISBN: 978-3-319-13650-9
eBook Packages: Computer Science (R0)