Unsupervised Weight Parameter Estimation Method for Ensemble Learning

Published in: Journal of Mathematical Modelling and Algorithms

Abstract

When there are multiple trained predictors, one may want to integrate them into a single predictor. However, this is challenging when the performances of the trained predictors are unknown and no labeled data are available for evaluating them. This paper describes a method that uses unlabeled data to estimate the weight parameters needed to build an ensemble predictor integrating multiple trained component predictors. The method is readily derived from a mathematical model of ensemble learning based on a generalized mixture of probability density functions and corresponding information divergence measures. Numerical experiments demonstrated that our method performs much better than simple average-based ensemble learning, even when the assumption placed on the performances of the component predictors does not hold exactly.
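
To make the setting concrete, the sketch below illustrates the ensemble structure the abstract refers to: the outputs of the component predictors are combined as a weighted mixture, the simple-average baseline corresponds to uniform weights, and the role of the proposed method is to supply non-uniform weights estimated from unlabeled data. This is a minimal hypothetical illustration in Python, not the authors' estimator; the function name ensemble_predict, the weight values, and the toy numbers are all assumptions made for the example.

    import numpy as np

    def ensemble_predict(component_probs, weights=None):
        """Combine component predictive distributions into one predictor.

        component_probs: shape (n_predictors, n_samples, n_classes); the
            class-probability outputs of the trained component predictors.
        weights: nonnegative mixture weights, one per predictor; None gives
            the uniform weights of the simple-average baseline.
        """
        probs = np.asarray(component_probs, dtype=float)
        n_predictors = probs.shape[0]
        if weights is None:
            w = np.full(n_predictors, 1.0 / n_predictors)  # simple average
        else:
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()  # normalize so the mixture stays a distribution
        # Arithmetic mixture: p(y|x) = sum_i w_i * p_i(y|x)
        return np.tensordot(w, probs, axes=1)

    # Hypothetical outputs of three component classifiers on two unlabeled
    # samples of a two-class problem.
    probs = [
        [[0.9, 0.1], [0.2, 0.8]],
        [[0.7, 0.3], [0.4, 0.6]],
        [[0.5, 0.5], [0.5, 0.5]],
    ]
    print(ensemble_predict(probs))                           # uniform weights
    print(ensemble_predict(probs, weights=[0.6, 0.3, 0.1]))  # non-uniform weights

With uniform weights, every component contributes equally regardless of its quality; the contribution of the paper is a way to choose better weights when no labeled data are available to measure each component's performance.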

References

  1. Amari, S.: Integration of stochastic models by minimizing α-divergence. Neural Comput. 19(10), 2780–2796 (2007)

  2. Amari, S., Nagaoka, H.: Methods of Information Geometry. American Mathematical Society, Providence (2000)

  3. Bennett, K.P., Demiriz, A., Maclin, R.: Exploiting unlabeled data in ensemble methods. In: Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’02), pp. 289–296. Edmonton, Alberta, Canada (2002)

  4. Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996)

  5. d’Alché-Buc, F., Grandvalet, Y., Ambroise, C.: Semi-supervised MarginBoost. In: Proceedings of Neural Information Processing Systems: Natural and Synthetic (NIPS’01), pp. 553–560. Vancouver, BC, Canada (2001)

  6. Chapelle, O., Schölkopf, B., Zien, A.: Semi-Supervised Learning. MIT Press, Cambridge (2006)

  7. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley-Interscience, New York (1991)

  8. Domingo, C., Watanabe, O.: MadaBoost: a modification of AdaBoost. In: Proceedings of the 13th Annual Conference on Computational Learning Theory (COLT’00), pp. 180–189. Stanford, CA, USA (2000)

  9. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55(1), 119–139 (1997)

  10. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. Ann. Stat. 28(2), 337–407 (2000)

  11. Kullback, S.: Information Theory and Statistics. Wiley, New York (1959)

  12. Mallapragada, P.K., Jin, R., Jain, A.K., Liu, Y.: SemiBoost: boosting for semi-supervised learning. IEEE Trans. Pattern Anal. Mach. Intell. 31(11), 2000–2014 (2009)

  13. Newman, D.J., Hettich, S., Blake, C.L., Merz, C.J.: UCI Repository of Machine Learning Databases (1998). http://www.ics.uci.edu/~mlearn/MLRepository.html

  14. Roli, F.: Semi-supervised multiple classifier systems: background and research directions. In: Proceedings of the 6th International Workshop on Multiple Classifier Systems (MCS’05), pp. 1–11. Seaside, CA, USA (2005)

  15. Schapire, R.E.: The strength of weak learnability. Mach. Learn. 5(2), 197–227 (1990)

  16. Shanbhag, S., Wolf, T.: Accurate anomaly detection through parallelism. IEEE Netw. 23(1), 22–28 (2009)

  17. Surowiecki, J.: The Wisdom of Crowds. Anchor Books, New York (2005)

  18. Uchida, M., Shioya, H.: A study on assignment of weight parameters in ensemble learning model. IEICE Trans. Inf. Syst., PT. 2 J86-D-II(7), 1131–1134 (2003). In Japanese

  19. Uchida, M., Shioya, H., Da-te, T.: Analysis and extension of ensemble learning. IEICE Trans. Inf. Syst., PT. 2 J84-D-II(7), 1537–1542 (2001). In Japanese

  20. Uchida, M., Maehara, Y., Shioya, H.: Design of an unsupervised weight parameter estimation method in ensemble learning. In: Proceedings of the 14th International Conference on Neural Information Processing (ICONIP 2007). Lecture Notes in Computer Science, vol. 4984, pp. 771–780. Springer, New York (2008)

  21. Ueda, N., Nakano, R.: Generalization error of ensemble estimators. In: Proceedings of the International Conference on Neural Networks 1996 (ICNN’96), vol. 3, pp. 90–95. Washington, DC, USA (1996)

  22. Zhou, Z.H.: When semi-supervised learning meets ensemble learning. In: Proceedings of the 8th International Workshop on Multiple Classifier Systems (MCS’09), pp. 529–538. Reykjavik, Iceland (2009)

  23. Zhu, X., Goldberg, A.B.: Introduction to Semi-Supervised Learning. Morgan and Claypool, San Rafael (2009)

Author information

Correspondence to Masato Uchida.

Additional information

A part of this paper appeared in the Post-Conference Proceedings of ICONIP 2007 [20]. This work was supported in part by the Japan Society for the Promotion of Science through a Grant-in-Aid for Scientific Research (S) (18100001).

About this article

Cite this article

Uchida, M., Maehara, Y. & Shioya, H. Unsupervised Weight Parameter Estimation Method for Ensemble Learning. J Math Model Algor 10, 307–322 (2011). https://doi.org/10.1007/s10852-011-9157-1
