Abstract
Traditional machine learning methods face limitations in building high-precision estimation models and in improving generalization ability, whereas ensemble learning, which combines multiple different single models into one, typically performs significantly better than any individual machine learning model. However, as data sets grow more diverse and larger in scale, ensemble learning algorithms suffer from incomplete feature representation. Convolutional neural networks (CNNs), with their excellent feature-learning ability, compensate for this shortcoming. This paper proposes an ensemble learning framework for convolutional neural networks based on multiple classifiers. First, UCI data sets are classified using ensemble learning algorithms built on multiple classifiers. Then, features are extracted from the MNIST image data set with a convolutional neural network, and the extracted features serve as input to the ensemble learning framework for classification. The experimental results show that ensemble learning achieves higher accuracy than a single classifier, and that the CNN + ensemble learning framework achieves higher accuracy than the ensemble learning framework alone.
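The paper's pipeline itself is not reproduced here; as a minimal illustrative sketch of why combining multiple classifiers can outperform each one alone, the toy example below majority-votes three deliberately weak single-feature rules over a hand-made data set. The data, rules, and voting threshold are invented for illustration only; the actual framework uses real base learners trained on UCI data or CNN-extracted MNIST features.

```python
# Toy majority-vote ensemble: three weak classifiers, each reading
# only one feature, are combined by voting. Each single rule
# misclassifies at least one sample, while the vote is correct on all.

# Each sample: (feature vector, true binary label).
data = [
    ((1, 0, 1), 1),
    ((0, 1, 1), 1),
    ((1, 1, 0), 1),
    ((0, 0, 0), 0),
    ((1, 0, 0), 0),
    ((0, 1, 0), 0),
]

# Three weak "classifiers": each predicts the label from one feature.
classifiers = [
    lambda x: x[0],
    lambda x: x[1],
    lambda x: x[2],
]

def majority_vote(x):
    """Predict 1 when at least two of the three base classifiers vote 1."""
    votes = sum(clf(x) for clf in classifiers)
    return 1 if votes >= 2 else 0

def accuracy(predict):
    """Fraction of samples on which `predict` matches the true label."""
    return sum(predict(x) == y for x, y in data) / len(data)

single_accs = [accuracy(clf) for clf in classifiers]
ensemble_acc = accuracy(majority_vote)
print("single classifiers:", single_accs)
print("majority-vote ensemble:", ensemble_acc)
```

The ensemble is correct on every sample even though no base rule is, because the base rules err on different samples; this error diversity among base classifiers is what the voting scheme exploits.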
Acknowledgements
This work is supported by the National Natural Science Foundation (Nos. 61672522, 41704115), the Opening Project of the Key Laboratory of Data Science and Intelligence Application (No. D1804), and the Jiangsu Graduate Research and Innovation Project (No. SJKY19-1889).
Ethics declarations
Conflict of interest
Yanyan Guo, Xin Wang, Pengcheng Xiao and Xinzheng Xu declare that they have no conflict of interest.
Informed consent
Informed consent was not required, as no humans or animals were involved.
Human and animal rights
This article does not contain any studies with human or animal subjects performed by any of the authors.
Additional information
Communicated by V. Loia.
Cite this article
Guo, Y., Wang, X., Xiao, P. et al. An ensemble learning framework for convolutional neural network based on multiple classifiers. Soft Comput 24, 3727–3735 (2020). https://doi.org/10.1007/s00500-019-04141-w