
Global/Local Hybrid Learning of Mixture-of-Experts from Labeled and Unlabeled Data

  • Conference paper
Hybrid Artificial Intelligent Systems (HAIS 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6678)


Abstract

Mixture-of-experts (ME) models are useful for solving complicated real-world classification problems. However, training an ME model with not only labeled data but also unlabeled data, which are easier to obtain, requires a learning algorithm that accounts for the characteristics of the ME model. We propose global-local co-training (GLCT), a hybrid method that combines the supervised learning (SL) procedure for the ME model with co-training, so that the ME model is trained in a semi-supervised learning (SSL) manner. GLCT uses a global model and a local model together, since the local model alone yields low accuracy when labeled training data are scarce. The two models enlarge the labeled data set by labeling unlabeled examples for each other and are then retrained on the enlarged set. To evaluate the method, we performed experiments on benchmark data sets from the UCI machine learning repository. The results confirm the feasibility of GLCT, and a comparison experiment shows that it outperforms the alternative method.
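The co-training loop described in the abstract, where two models take turns labeling unlabeled examples for each other, can be sketched as follows. This is a minimal illustrative skeleton, not the paper's GLCT algorithm: the `NearestCentroid` classifier and the `co_train` helper are hypothetical stand-ins for the paper's global ME model and local model, and the consensus step is simplified.

```python
import numpy as np

class NearestCentroid:
    """Tiny confidence-scoring classifier, a hypothetical stand-in
    for the paper's global/local models."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_proba(self, X):
        # Softmax-style confidence from distance to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        w = np.exp(-d)
        return w / w.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=1)]

def co_train(model_a, model_b, X_lab, y_lab, X_unlab, rounds=5, k=2):
    """Each round, both models score the unlabeled pool; each model's k
    most confident examples are pseudo-labeled and moved into the shared
    labeled set, which both models are then retrained on."""
    X_l, y_l = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        model_a.fit(X_l, y_l)
        model_b.fit(X_l, y_l)
        picked = set()
        for m in (model_a, model_b):
            conf = m.predict_proba(pool).max(axis=1)
            for i in np.argsort(-conf)[:k]:
                picked.add(int(i))
        idx = sorted(picked)
        new_y = model_a.predict(pool[idx])  # consensus step simplified
        X_l = np.vstack([X_l, pool[idx]])
        y_l = np.concatenate([y_l, new_y])
        pool = np.delete(pool, idx, axis=0)
    model_a.fit(X_l, y_l)
    model_b.fit(X_l, y_l)
    return model_a, model_b
```

The key property this sketch shares with the method in the abstract is that the labeled set grows from the unlabeled pool via the models' own confident predictions, so each model benefits from examples surfaced by the other.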




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yoon, JW., Cho, SB. (2011). Global/Local Hybrid Learning of Mixture-of-Experts from Labeled and Unlabeled Data. In: Corchado, E., Kurzyński, M., Woźniak, M. (eds) Hybrid Artificial Intelligent Systems. HAIS 2011. Lecture Notes in Computer Science, vol 6678. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21219-2_57


  • DOI: https://doi.org/10.1007/978-3-642-21219-2_57

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21218-5

  • Online ISBN: 978-3-642-21219-2

  • eBook Packages: Computer Science, Computer Science (R0)
