
An ensemble learning framework for convolutional neural network based on multiple classifiers

  • Methodologies and Application
  • Published in Soft Computing (2020)

Abstract

Traditional machine learning methods face limitations in building high-accuracy estimation models and in improving generalization ability. Ensemble learning, which combines several different single models into one, significantly outperforms any individual machine learning model. However, as data sets grow more diverse and larger in scale, ensemble learning algorithms suffer from incomplete feature representation. Convolutional neural networks (CNNs), with their strong feature-learning ability, compensate for this shortcoming. This paper proposes an ensemble learning framework for convolutional neural networks based on multiple classifiers. First, UCI data sets are classified using ensemble learning algorithms built on multiple base classifiers. Then, features are extracted from the MNIST image data set with a convolutional neural network, and the extracted features are fed to the ensemble learning framework for classification. The experimental results show that the accuracy of the ensemble exceeds that of any single classifier, and that the CNN + ensemble learning framework is more accurate than the ensemble learning framework alone.
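The first stage described above, a voting ensemble over heterogeneous base classifiers, can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the choice of base learners (KNN, SVM, decision tree, naive Bayes, all of which appear in the paper's references), the iris data set as a stand-in for the UCI benchmarks, and every hyperparameter are assumptions made for the sketch.

```python
# Majority-voting ensemble over heterogeneous base classifiers,
# evaluated on a small UCI-style data set (iris as a stand-in).
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm", SVC(kernel="rbf", random_state=0)),
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # each base classifier casts one vote per sample
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(f"ensemble accuracy: {acc:.3f}")
```

For the second stage, the feature matrix `X` would instead hold the activations a trained CNN produces for each MNIST image; the voting ensemble itself is unchanged, which is the sense in which the CNN "makes up for" the ensemble's feature-representation weakness.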




Acknowledgements

This work is supported by the National Natural Science Foundation (Nos. 61672522, 41704115), the Opening Project of the Key Laboratory of Data Science and Intelligence Application (No. D1804), and the Jiangsu Graduate Research and Innovation Project (No. SJKY19-1889).

Author information

Corresponding author

Correspondence to Xinzheng Xu.

Ethics declarations

Conflict of interest

Yanyan Guo, Xin Wang, Pengcheng Xiao and Xinzheng Xu declare that they have no conflict of interest.

Informed consent

Informed consent was not required, as no humans or animals were involved.

Human and animal rights

This article does not contain any studies with human or animal subjects performed by any of the authors.

Additional information

Communicated by V. Loia.


About this article


Cite this article

Guo, Y., Wang, X., Xiao, P. et al. An ensemble learning framework for convolutional neural network based on multiple classifiers. Soft Comput 24, 3727–3735 (2020). https://doi.org/10.1007/s00500-019-04141-w

