
Improving combination method of NCL experts using gating network

Neural Computing and Applications

Abstract

Negative Correlation Learning (NCL) is a popular ensemble method that employs a special error function for the simultaneous training of base neural network (NN) experts. In this article, we propose an improved version of the NCL method in which the gating network, the combining component of the Mixture of Experts method, is used to combine the base NNs of the NCL ensemble. The special error function of NCL encourages each NN expert to learn a different part or aspect of the training data, so the local competence of the experts should be taken into account when their outputs are combined. The gating network provides exactly this functionality, and the proposed method is therefore called Gated NCL. The improved ensemble method is compared with the approaches previously used for combining NCL experts, namely the winner-take-all (WTA) and average (AVG) combining techniques, on several classification problems from the UCI machine learning repository. The experimental results show that the proposed ensemble method significantly improves performance over the previous combining approaches.
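For concreteness, the two ingredients can be stated compactly. In the standard NCL formulation (Liu and Yao 1999), each expert F_i is trained on the loss e_i = 1/2 (F_i − d)^2 + λ·p_i, where p_i = (F_i − F_bar) · Σ_{j≠i} (F_j − F_bar) penalizes correlation with the rest of the ensemble; a Mixture-of-Experts gating network produces input-dependent weights g_i(x), so the combined output is F(x) = Σ_i g_i(x)·F_i(x). The NumPy sketch below illustrates both pieces; it is not the authors' implementation, and the function names, array shapes, and the penalty strength lam are illustrative assumptions.

```python
import numpy as np

def ncl_penalty(outputs, lam=0.5):
    """Per-expert NCL penalty term lam * p_i, with
    p_i = (F_i - F_bar) * sum_{j != i} (F_j - F_bar).

    outputs: array of shape (M, N) -- M experts' outputs on N samples.
    """
    f_bar = outputs.mean(axis=0)          # ensemble mean F_bar per sample
    dev = outputs - f_bar                 # deviations (F_i - F_bar)
    # The deviations sum to zero over the experts, so
    # sum_{j != i} (F_j - F_bar) = -(F_i - F_bar).
    return lam * dev * (-dev)

def gated_combination(expert_outputs, gate_logits):
    """Gated combination F(x) = sum_i g_i(x) * F_i(x), where the
    weights g_i are a softmax over the gating network's raw scores.

    expert_outputs, gate_logits: arrays of shape (M, N).
    """
    g = np.exp(gate_logits - gate_logits.max(axis=0, keepdims=True))
    g /= g.sum(axis=0, keepdims=True)     # softmax over the M experts
    return (g * expert_outputs).sum(axis=0)
```

Under the same conventions, the AVG rule is simply expert_outputs.mean(axis=0), and WTA forwards, for each input, the output of the single expert with the largest score, e.g. expert_outputs[np.argmax(gate_logits, axis=0), np.arange(expert_outputs.shape[1])].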



Author information

Correspondence to Saeed Masoudnia.


Cite this article

Ebrahimpour, R., Arani, S.A.A.A. & Masoudnia, S. Improving combination method of NCL experts using gating network. Neural Comput & Applic 22, 95–101 (2013). https://doi.org/10.1007/s00521-011-0746-8
