
Boost-wise pre-loaded mixture of experts for classification tasks

Original Article · Neural Computing and Applications

Abstract

A modified version of the Boosted Mixture of Experts (BME) is presented in this paper. While previous related works, namely BME, attempt to improve performance by incorporating the complementary features of a hybrid combining framework, they suffer from several drawbacks. Analyzing the problems of these previous approaches suggested several modifications, which led us to propose a new method called Boost-wise Pre-loaded Mixture of Experts (BPME). We present a modification to the pre-loading (initialization) procedure of the mixture of experts (ME) that addresses and overcomes the earlier problems by employing a two-stage pre-loading procedure. In this approach, both error and confidence measures are used as the difficulty criteria in the boost-wise partitioning of the problem space.
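The two-stage idea sketched in the abstract can be made concrete in code. Below is a minimal, hypothetical Python/NumPy sketch of how error and confidence might be combined into a per-sample difficulty score and used to partition the problem space into boost-wise bands, one band per expert, before joint ME training. All function names and the exact scoring rule are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def difficulty_scores(probs, y):
    """Per-sample difficulty combining the two criteria named in the
    abstract: classification error and prediction confidence.
    probs: (n, k) class probabilities from a base classifier; y: (n,) labels.
    The exact scoring rule here is an assumption for illustration."""
    correct = probs.argmax(axis=1) == y
    conf = probs.max(axis=1)
    # Misclassified samples score in (1, 2]; correct but unconfident
    # samples score in [0, 1): any error counts as harder than low confidence.
    return np.where(correct, 1.0 - conf, 2.0 - conf)

def boostwise_partitions(difficulty, n_experts):
    """Stage 1: sort samples by difficulty and split them into contiguous
    bands, one per expert, so each expert is pre-loaded on a region of
    comparable difficulty before joint ME training (stage 2)."""
    order = np.argsort(difficulty)            # sample indices, easy -> hard
    return np.array_split(order, n_experts)   # one index set per expert

# Illustrative usage on random stand-in data:
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)   # fake base-classifier outputs
y = rng.integers(0, 3, size=100)
bands = boostwise_partitions(difficulty_scores(probs, y), n_experts=3)
# Each expert i would then be pre-trained on X[bands[i]], y[bands[i]]
# before the whole mixture (experts + gating network) is trained jointly.
```

Stage 2, under the same assumptions, would pre-train each expert on its band and then fine-tune the complete mixture (experts plus gating network) jointly, in the usual ME fashion.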




Author information

Correspondence to Reza Ebrahimpour.


About this article

Cite this article

Ebrahimpour, R., Sadeghnejad, N., Arani, S.A.A.A. et al. Boost-wise pre-loaded mixture of experts for classification tasks. Neural Comput & Applic 22 (Suppl 1), 365–377 (2013). https://doi.org/10.1007/s00521-012-0909-2
