On ensemble techniques of weight-constrained neural networks


Abstract

Ensemble learning is one of the most fundamental and reliable strategies for building powerful and accurate predictive models, as it exploits the combined predictions of multiple learners. In this paper, we propose two ensemble prediction models that exploit the classification performance of weight-constrained neural networks (WCNNs). The proposed models use Bagging and Boosting, two of the most popular ensemble strategies, to efficiently combine the predictions of WCNN classifiers. We conducted a series of experiments on a variety of benchmarks from the UCI repository to evaluate the performance of the proposed models against state-of-the-art ensemble classifiers. The experimental results demonstrate the prediction accuracy of the proposed models and provide empirical evidence that hybridizing ensemble learning with WCNNs can yield efficient and powerful classification models.
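The paper does not include an implementation, but the core idea is easy to sketch. Below is a minimal, self-contained NumPy illustration, not the authors' code: the weight constraint of a WCNN is approximated here by projecting the weights onto the box [-bound, bound] after each gradient step (the published WCNNs use a dedicated constrained training algorithm), the base learner is a one-hidden-layer binary classifier, and the two ensembles follow textbook Bagging (bootstrap resampling with majority voting) and AdaBoost.M1-style reweighting. All names, hyperparameters, and the projected-gradient trainer are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

class WCNN:
    """One-hidden-layer MLP for binary classification whose weights are
    kept inside the box [-bound, bound] by projection after each step."""

    def __init__(self, n_in, n_hidden=10, bound=2.0, lr=0.1, epochs=200):
        self.bound, self.lr, self.epochs = bound, lr, epochs
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def _forward(self, X):
        self.H = np.tanh(X @ self.W1 + self.b1)                     # hidden layer
        return 1.0 / (1.0 + np.exp(-(self.H @ self.W2 + self.b2)))  # sigmoid output

    def fit(self, X, y, sample_weight=None):
        w = np.full(len(y), 1.0 / len(y)) if sample_weight is None else sample_weight
        for _ in range(self.epochs):
            p = self._forward(X)
            d_out = w * (p - y)                   # grad of weighted cross-entropy
            gW2 = self.H.T @ d_out
            gb2 = d_out.sum()
            d_hid = np.outer(d_out, self.W2) * (1.0 - self.H ** 2)
            gW1 = X.T @ d_hid
            gb1 = d_hid.sum(axis=0)
            for param, grad in ((self.W1, gW1), (self.W2, gW2)):
                param -= self.lr * grad
                np.clip(param, -self.bound, self.bound, out=param)  # weight projection
            self.b1 -= self.lr * gb1              # biases are left unconstrained
            self.b2 -= self.lr * gb2
        return self

    def predict(self, X):
        return (self._forward(X) >= 0.5).astype(int)

def bagging_wcnn(X, y, n_estimators=11):
    """Bagging: train each WCNN on a bootstrap sample, combine by majority vote."""
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y), size=len(y))   # bootstrap resample
        models.append(WCNN(X.shape[1]).fit(X[idx], y[idx]))

    def predict(X_new):
        votes = np.mean([m.predict(X_new) for m in models], axis=0)
        return (votes >= 0.5).astype(int)

    return predict

def adaboost_wcnn(X, y, n_estimators=11):
    """AdaBoost.M1-style boosting: reweight examples toward earlier mistakes."""
    w = np.full(len(y), 1.0 / len(y))
    models, alphas = [], []
    for _ in range(n_estimators):
        m = WCNN(X.shape[1]).fit(X, y, sample_weight=w)
        miss = m.predict(X) != y
        err = np.clip(w[miss].sum() / w.sum(), 1e-10, 0.5 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)      # vote weight of this learner
        w = w * np.exp(alpha * np.where(miss, 1.0, -1.0))
        w /= w.sum()
        models.append(m)
        alphas.append(alpha)

    def predict(X_new):
        score = sum(a * (2 * m.predict(X_new) - 1) for m, a in zip(models, alphas))
        return (score >= 0).astype(int)

    return predict

A caller would build an ensemble with, for example, predict = bagging_wcnn(X_train, y_train) and score predict(X_test) against held-out labels; swapping in adaboost_wcnn changes only how the base WCNNs are trained and combined.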



Author information

Corresponding author

Correspondence to Ioannis E. Livieris.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Livieris, I.E., Iliadis, L. & Pintelas, P. On ensemble techniques of weight-constrained neural networks. Evolving Systems 12, 155–167 (2021). https://doi.org/10.1007/s12530-019-09324-2

