Ensemble learning based on fitness Euclidean-distance ratio differential evolution for classification

Abstract

Ensemble learning combines a set of base learners to improve performance in machine learning, and the accuracy and diversity of the base learners are two key factors in its success. However, these two factors are usually in conflict. To address this problem, this paper proposes a novel ensemble learning algorithm based on fitness Euclidean-distance ratio differential evolution for training neural network ensembles. The proposed algorithm employs a multimodal evolutionary algorithm, capable of producing diverse solutions, to search for multiple optima in the space of base-learner parameters, where each optimum yields one trained model. A dynamic ensemble selection scheme then selects appropriate individuals for the ensemble. The proposed algorithm is evaluated on several benchmark problems and compared with related ensemble learning models. The experimental results show that it outperforms the related works and produces neural network ensembles with better generalization.
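
To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how a fitness Euclidean-distance ratio (FER) can steer differential evolution mutation while evolving the weights of small neural networks, with a simple majority vote over the best individuals standing in for the paper's dynamic ensemble selection scheme. The toy dataset, network size, DE parameters, and helper names such as `fer_neighbor_best` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset (assumption: stands in for a UCI benchmark).
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

H = 5                      # hidden units of each base network
D = 2 * H + H + H + 1      # flat weight vector: input->hidden, biases, hidden->out

def predict(w, X):
    """Forward pass of a 2-H-1 tanh network encoded as a flat weight vector."""
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2 > 0).astype(int)

def fitness(w):
    """Training accuracy of the network encoded by w."""
    return np.mean(predict(w, X) == y)

def fer_neighbor_best(pop, fit, i, span):
    """Index of the neighbor with the highest fitness-Euclidean-distance
    ratio relative to individual i: fitness gain per unit distance."""
    alpha = span / (fit.max() - fit.min() + 1e-12)
    d = np.linalg.norm(pop - pop[i], axis=1) + 1e-12
    fer = alpha * (fit - fit[i]) / d
    fer[i] = -np.inf
    return int(np.argmax(fer))

NP, F, CR, GEN = 40, 0.5, 0.9, 100
pop = rng.uniform(-1, 1, size=(NP, D))
span = np.linalg.norm(np.full(D, 2.0))   # diagonal of the initial [-1, 1]^D box
fit = np.array([fitness(w) for w in pop])

for _ in range(GEN):
    for i in range(NP):
        # Current-to-FER-best mutation: pull toward the FER neighbor so that
        # different individuals converge to different optima (niching).
        nb = fer_neighbor_best(pop, fit, i, span)
        r1, r2 = rng.choice([j for j in range(NP) if j != i], 2, replace=False)
        mutant = pop[i] + F * (pop[nb] - pop[i]) + F * (pop[r1] - pop[r2])
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True    # ensure at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        tf = fitness(trial)
        if tf >= fit[i]:                 # greedy DE selection
            pop[i], fit[i] = trial, tf

# Static majority vote over the five fittest individuals; the paper instead
# applies a dynamic ensemble selection scheme per query pattern.
members = pop[np.argsort(fit)[-5:]]
votes = np.stack([predict(w, X) for w in members])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("ensemble accuracy:", np.mean(ensemble_pred == y))
```

The FER term rewards neighbors that offer a large fitness improvement per unit of Euclidean distance, so each individual is attracted to a nearby good solution rather than the single global best; this is what lets the population hold several distinct trained networks at once.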




Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61473266, 61876169, and 61673404).

Author information

Corresponding author

Correspondence to Jing Liang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Liang, J., Wei, Y., Qu, B. et al. Ensemble learning based on fitness Euclidean-distance ratio differential evolution for classification. Nat Comput 20, 77–87 (2021). https://doi.org/10.1007/s11047-020-09791-6
