
A Single Loop EM Algorithm for the Mixture of Experts Architecture

  • Conference paper
Advances in Neural Networks – ISNN 2009 (ISNN 2009)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5552)


Abstract

The mixture of experts (ME) architecture is a powerful neural network model for supervised learning, which contains a number of ‘‘expert’’ networks plus a gating network. The expectation-maximization (EM) algorithm can be used to learn the parameters of the ME architecture. Several methods already exist for implementing the EM algorithm, such as the IRLS algorithm, the ECM algorithm, and an approximation to the Newton-Raphson algorithm. These implementations differ in how they train the gating network, and all of them result in a double-loop training procedure, i.e., an inner training loop runs within the general (outer) training loop. In this paper, we propose a least mean square regression method that learns or computes the parameters of the gating network directly, which leads to a single-loop EM algorithm (i.e., one with no inner training loop) for the ME architecture. Simulation experiments demonstrate that our proposed EM algorithm outperforms the existing ones in both speed and classification accuracy.
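
The abstract only outlines the idea, so the following Python sketch is an editor's illustration of how a single-loop EM procedure for a mixture of linear experts might look: responsibilities are computed in the E-step, the experts are refit by responsibility-weighted least squares, and the gating parameters are obtained by one direct least-squares regression instead of an inner IRLS/Newton loop. The function names, the Gaussian expert model, and the choice of log-responsibilities as the regression target are assumptions made for illustration, not the paper's exact formulation.

import numpy as np

def softmax(Z):
    # Row-wise softmax with the usual max-shift for numerical stability.
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def single_loop_em_me(X, y, K=3, n_iter=50, sigma2=1.0, seed=0):
    # X: (N, d) inputs, y: (N,) real-valued targets (hypothetical regression setup).
    rng = np.random.default_rng(seed)
    N, d = X.shape
    Phi = np.hstack([X, np.ones((N, 1))])            # design matrix with bias column
    W_exp = rng.normal(scale=0.1, size=(K, d + 1))   # linear expert weights
    W_gate = np.zeros((K, d + 1))                    # gating (softmax) weights

    for _ in range(n_iter):
        # E-step: posterior responsibility of expert k for sample i.
        G = softmax(Phi @ W_gate.T)                            # gating probabilities (N, K)
        mu = Phi @ W_exp.T                                     # expert predictions (N, K)
        lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2)   # Gaussian likelihoods
        H = G * lik
        H /= H.sum(axis=1, keepdims=True) + 1e-12

        # M-step (experts): responsibility-weighted least squares per expert.
        for k in range(K):
            A = Phi.T @ (H[:, k, None] * Phi) + 1e-6 * np.eye(d + 1)
            b = Phi.T @ (H[:, k] * y)
            W_exp[k] = np.linalg.solve(A, b)

        # M-step (gating): a single least-squares regression replaces the
        # inner IRLS/Newton loop.  The log-responsibilities serve here as a
        # stand-in regression target (an assumption of this sketch); the
        # softmax in the next E-step renormalises the resulting scores.
        T = np.log(H + 1e-12)
        W_gate = np.linalg.lstsq(Phi, T, rcond=None)[0].T

    return W_exp, W_gate

A call such as W_exp, W_gate = single_loop_em_me(X, y, K=3) would fit three linear experts and their gating network without any nested iterative solver, which is the structural point the paper makes about its single-loop algorithm.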


References

  1. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive mixtures of local experts. Neural Computation 3, 79–87 (1991)


  2. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B 39, 1–38 (1977)


  3. Redner, R.A., Walker, H.F.: Mixture densities, maximum likelihood, and the EM algorithm. SIAM Review 26, 195–239 (1984)


  4. Ma, J., Xu, L., Jordan, M.I.: Asymptotic convergence rate of the EM algorithm for Gaussian mixtures. Neural Computation 12, 2881–2907 (2000)


  5. Ma, J., Xu, L.: Asymptotic convergence properties of the EM algorithm with respect to the overlap in the mixture. Neurocomputing 68, 105–129 (2005)


  6. Ma, J., Fu, S.: On the correct convergence of the EM algorithm for Gaussian mixtures. Pattern Recognition 38(12), 2602–2611 (2005)


  7. Jordan, M.I., Jacobs, R.A.: Hierarchical mixtures of experts and the EM algorithm. Neural Computation 6, 181–214 (1994)


  8. Jordan, M.I., Xu, L.: Convergence results for the EM approach to mixtures of experts architectures. Neural Computation 8(9), 1409–1431 (1995)


  9. Chen, K., Xu, L.: Improved learning algorithms for mixture of experts in multiclass classification. Neural Networks 12(9), 1229–1252 (1999)


  10. Ng, S.K., McLachlan, G.J.: Using the EM algorithm to train neural networks: misconceptions and a new algorithm for multiclass classification. IEEE Transactions on Neural Networks 15(3), 738–749 (2004)


  11. Ng, S.K., McLachlan, G.J.: Extension of mixture-of-experts networks for binary classification of hierarchical data. Artificial Intelligence in Medicine 41, 51–67 (2007)


  12. UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine, http://www.ics.uci.edu/~mlearn/MLRepository.html




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yang, Y., Ma, J. (2009). A Single Loop EM Algorithm for the Mixture of Experts Architecture. In: Yu, W., He, H., Zhang, N. (eds) Advances in Neural Networks – ISNN 2009. ISNN 2009. Lecture Notes in Computer Science, vol 5552. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01510-6_109

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-01510-6_109

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-01509-0

  • Online ISBN: 978-3-642-01510-6

  • eBook Packages: Computer Science, Computer Science (R0)
