Abstract
A modified version of the Boosted Mixture of Experts (BME) is presented in this paper. While previous related works, namely BME, attempt to improve performance by incorporating the complementary features of a hybrid combining framework, they suffer from several drawbacks. Analyzing the problems of these approaches suggested several modifications, which led us to propose a new method called Boost-wise Pre-loaded Mixture of Experts (BPME). We present a modification to the pre-loading (initialization) procedure of the mixture of experts (ME) that addresses and overcomes the earlier problems by employing a two-stage pre-loading procedure. In this approach, both error and confidence measures are used as the difficulty criteria in the boost-wise partitioning of the problem space.
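
To make the two-stage idea concrete, the sketch below shows one plausible reading of difficulty-based, boost-wise partitioning: a pilot classifier is fit first, a per-sample difficulty score is formed from an error indicator together with the complement of the classifier's confidence, and successive experts are then pre-loaded on progressively harder subsets. Everything in this sketch is an assumption made for exposition (the logistic-regression pilot, the equal weighting of the two criteria, the 0.5 threshold, and all function names); it is not the authors' implementation.

    # Illustrative sketch of boost-wise, difficulty-based partitioning for
    # pre-loading mixture-of-experts members. All design choices here
    # (pilot model, weights, threshold) are assumptions, not the paper's.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def difficulty(model, X, y):
        """Per-sample difficulty: 0/1 error combined with (1 - confidence)."""
        proba = model.predict_proba(X)
        err = (model.predict(X) != y).astype(float)
        # Column of the true class in predict_proba output (classes_ is sorted).
        col = np.searchsorted(model.classes_, y)
        confidence = proba[np.arange(len(y)), col]
        # Equal weighting of the two criteria is an arbitrary choice here.
        return 0.5 * err + 0.5 * (1.0 - confidence)

    def preload_experts(X, y, n_experts=3, threshold=0.5):
        """Pre-load experts boost-wise: each stage refits a pilot on the
        remaining samples, keeps the easy ones for the current expert, and
        passes the hard ones on. Assumes each subset retains >1 class."""
        experts = []
        remaining = np.arange(len(y))
        for _ in range(n_experts - 1):
            pilot = LogisticRegression(max_iter=1000).fit(X[remaining], y[remaining])
            d = difficulty(pilot, X[remaining], y[remaining])
            easy, hard = remaining[d <= threshold], remaining[d > threshold]
            if len(easy) == 0 or len(hard) == 0:
                break
            experts.append(LogisticRegression(max_iter=1000).fit(X[easy], y[easy]))
            remaining = hard
        # Final expert is pre-loaded on the hardest remaining region.
        experts.append(LogisticRegression(max_iter=1000).fit(X[remaining], y[remaining]))
        return experts

    # Usage (illustrative): experts = preload_experts(X_train, y_train, n_experts=3)

In a full BPME system, a gating network would subsequently be trained over the pre-loaded experts; that step is omitted from the sketch.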
References
Waterhouse S, Cook G (1997) Ensemble methods for phoneme classification. In: Mozer MC, Jordan MI, Petsche T (eds) Advances in neural information processing systems. MIT Press, Cambridge
Avnimelech R, Intrator N (1999) Boosted mixture of experts: an ensemble learning scheme. Neural Comput 11(2):483–497
Riad T, Hocine B, Salima M (2012) New direct torque neuro-fuzzy control based SVM-three level inverter-fed induction motor. Int J Control Autom Syst 8(2):425–432
Chen CH, Liang YW, Liaw DC et al (2010) Design of midcourse guidance laws via a combination of fuzzy and SMC approaches. Int J Control Autom Syst 8(2):272–278
Kwon WY, Suh IH, Lee S (2011) SSPQL: stochastic shortest path-based Q-learning. Int J Control Autom Syst 9(2):328–338
Yu Z, Nam MY, Sedai S et al (2009) Evolutionary fusion of a multi-classifier system for efficient face recognition. Int J Control Autom Syst 7(1):33–40
Ebrahimpour R, Sadeghnejad N, Amiri A (2010) Low resolution face recognition using combination of diverse classifiers. In: International conference on soft computing and pattern recognition (SoCPaR), pp 265–268
Ebrahimpour R, Sadeghnejad N, Arani A (2011) Low resolution face recognition using mixture of experts with different representations. In: International conference on soft computing and pattern recognition (SoCPaR), pp 475–480
Kuncheva LI (2004) Combining pattern classifiers: methods and algorithms. Wiley, New York
Tumer K, Ghosh J (1996) Error correlation and error reduction in ensemble classifiers. Connect Sci 8:385–404
Jacobs RA (1997) Bias/variance analyses of mixtures-of-experts architectures. Neural Comput 9(2):369–383
Sharkey AJC (1996) On combining artificial neural nets. Connect Sci 8:299–314
Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140
Schapire RE (1990) The strength of weak learnability. Mach Learn 5(2):197–227
Liu Y, Yao X (1999) Simultaneous training of negatively correlated neural networks in an ensemble. IEEE Trans Syst Man Cybern Part B Cybern 29(6):716–725
Jacobs RA, Jordan MI, Nowlan SJ, Hinton GE (1991) Adaptive mixtures of local experts. Neural Comput 3:79–87
Islam MM, Yao X, Nirjon SMS et al (2008) Bagging and boosting negatively correlated neural networks. IEEE Trans Syst Man Cybern Part B Cybern 38(3):771–784
Polikar R (2007) Bootstrap inspired techniques in computational intelligence. IEEE Signal Process Mag 24(4):56–72
Wang W, Jones P, Partridge D (2000) Diversity between neural networks and decision trees for building multiple classifier systems. In: Kittler J, Roli F (eds) Multiple classifier systems. Ser. Lecture Notes in Computer Science, vol 1857. Springer, Cagliari, pp 240–249
Tang B, Heywood MI, Shepherd M (2002) Input partitioning to mixture of experts. In: Proceedings of the international joint conference on neural networks, pp 227–232
Hansen JV (1999) Combining predictors: comparison of five meta machine learning methods. Inform Sci 119(1–2):91–105
Ebrahimpour R, Kabir E, Yousefi MR (2008) Teacher-directed learning in view-independent face recognition with mixture of experts using overlapping eigenspaces. Comput Vis Image Underst 111(2):195–206
Frank A, Asuncion A (2010) UCI machine learning repository [http://archive.ics.uci.edu/ml]. University of California, School of Information and Computer Science, Irvine
ELENA project databases. http://www.dice.ucl.ac.be/neural-nets/Research/Projects/ELENA/elena.htm
Jacobs RA, Jordan MI, Barto AG (1991) Task decomposition through competition in a modular connectionist architecture—the what and where vision tasks. Cogn Sci 15:219–250
Cite this article
Ebrahimpour, R., Sadeghnejad, N., Arani, S.A.A.A. et al. Boost-wise pre-loaded mixture of experts for classification tasks. Neural Comput & Applic 22 (Suppl 1), 365–377 (2013). https://doi.org/10.1007/s00521-012-0909-2