Self-poised Ensemble Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 3646)

Abstract

This paper proposes a new approach to training ensembles of learning machines in a regression context. At each iteration a new learner is added to compensate for the error made by the previous learner in the prediction of its training patterns. The algorithm operates directly on the values to be predicted by the next machine, keeping the ensemble at the target hypothesis while ensuring diversity. We provide a theoretical explanation that clarifies what the method does algorithmically and allows us to show its stochastic convergence. Finally, experimental results are presented comparing the performance of this algorithm with boosting and bagging on two well-known data sets.
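The abstract describes the core loop: each new member is fitted not to the raw targets but to targets shifted so that it compensates the previous member's error on the training patterns. The sketch below is a minimal, illustrative reading of that idea, assuming a simple-average ensemble, scikit-learn regression trees as base learners, and the specific target-shift rule targets_t = y + (y - h_{t-1}(X)); the function and parameter names are hypothetical, and the paper's exact update rule and combination scheme may differ.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def fit_self_poised_ensemble(X, y, n_learners=10, max_depth=4):
    """Illustrative sketch only: each new learner is trained on targets shifted
    to cancel the previous learner's error on the training patterns
    (one plausible reading of the abstract, not the authors' exact algorithm)."""
    y = np.asarray(y, dtype=float)
    learners = []
    targets = y.copy()                           # the first learner sees the true targets
    for _ in range(n_learners):
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, targets)
        learners.append(h)
        residual = y - h.predict(X)              # error of the learner just added
        targets = y + residual                   # next learner's targets compensate that error
    return learners


def predict_ensemble(learners, X):
    # Combine members by a simple average (an assumption; the paper may weight members differently).
    return np.mean([h.predict(X) for h in learners], axis=0)
```

Under this rule, averaging two consecutive members cancels their errors pairwise, which is one way to keep the combined prediction near the target while still forcing the members to differ from one another.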

This work was supported in part by Research Grants Fondecyt (Chile) 1040365 and 7040051, and in part by a Research Grant from DGIP-UTFSM (Chile). Partial support was also received from Research Grant BMBF (Germany) CHL 03-Z13.




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ñanculef, R., Valle, C., Allende, H., Moraga, C. (2005). Self-poised Ensemble Learning. In: Famili, A.F., Kok, J.N., Peña, J.M., Siebes, A., Feelders, A. (eds) Advances in Intelligent Data Analysis VI. IDA 2005. Lecture Notes in Computer Science, vol 3646. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11552253_25

  • DOI: https://doi.org/10.1007/11552253_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28795-7

  • Online ISBN: 978-3-540-31926-9

  • eBook Packages: Computer Science, Computer Science (R0)
