DOI: 10.1145/3396474.3396484

Mixtures of Heterogeneous Experts

Published: 30 May 2020

ABSTRACT

No single machine learning algorithm is most accurate for all problems, a consequence of each algorithm's inductive bias. Research has shown that combining experts of the same type, referred to as a mixture of homogeneous experts, can increase ensemble accuracy by reducing the adverse effect of the algorithm's inductive bias. However, the predictive power of a mixture of homogeneous experts remains limited by the inductive bias of the single algorithm that makes up the mixture. For this reason, combinations of different machine learning algorithms, referred to as mixtures of heterogeneous experts, have been proposed to exploit the strengths of different algorithms and to reduce the adverse effects of their inductive biases. This paper presents a mixture of heterogeneous experts and compares its performance with that of a number of mixtures of homogeneous experts on a set of classification problems. The results indicate that a mixture of heterogeneous experts aggregates the advantages of its constituent experts, increasing predictive accuracy. The mixture of heterogeneous experts not only outperformed all homogeneous ensembles on two of the datasets, but also achieved the best overall accuracy rank across the datasets.
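To make the idea concrete, below is a minimal sketch of a mixture of heterogeneous experts in Python using scikit-learn. The paper's exact combination scheme is not specified in this abstract, so a simple soft-voting ensemble of three base learners with different inductive biases stands in for it; the choice of learners and the iris dataset are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch: a heterogeneous ensemble via scikit-learn's VotingClassifier.
# Illustration only; the paper's actual mixture/combination scheme may differ.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Heterogeneous experts: each base learner carries a different inductive bias.
experts = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
]

# Soft voting averages the experts' predicted class probabilities.
mixture = VotingClassifier(estimators=experts, voting="soft")

print("mixture accuracy:", cross_val_score(mixture, X, y, cv=5).mean())
```

A soft vote averages the experts' class-probability estimates, so an expert that is confidently correct where the others are uncertain can sway the decision; classical mixture-of-experts models go further and use a gating network to weight experts per input rather than globally.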


Published in

ISMSI '20: Proceedings of the 2020 4th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence
March 2020, 142 pages
ISBN: 9781450377614
DOI: 10.1145/3396474
Copyright © 2020 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

