Abstract
In this paper, we present two ensemble learning algorithms that use bootstrapping and out-of-bag estimation in an attempt to inherit the robustness of bagging against overfitting. Unlike in bagging, in these algorithms each learner has visibility into the other learners, and the learners cooperate to achieve diversity, a property widely recognized as critical to ensemble models. Experiments are reported on two regression problems obtained from the UCI repository.
This work was supported in part by Research Grant Fondecyt (Chile) 1040365, 7060040 and 1070220. Partial support was also received from Research Grant 2407265 DGIP-UTFSM (Chile).
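The algorithms described above build on bagging with out-of-bag (OOB) estimation. As a point of reference, the following sketch illustrates the standard technique: each base learner is trained on a bootstrap resample, and each training point is evaluated only by the learners that never saw it, yielding an almost-unbiased error estimate. This is a minimal illustration of ordinary bagging with OOB error, not the paper's coupled-learner algorithms; the function names and the choice of base learner are our own.

```python
import numpy as np

def bagged_regressor(X, y, fit, predict, n_estimators=25, rng=None):
    """Train n_estimators base learners on bootstrap samples of (X, y).

    `fit(Xb, yb)` must return a fitted model; `predict(model, Xq)` must
    return its predictions. Returns the list of models together with the
    out-of-bag mean-squared-error estimate.
    """
    rng = np.random.default_rng(rng)
    n = len(y)
    models, oob_masks = [], []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)   # bootstrap sample, drawn with replacement
        oob = np.ones(n, dtype=bool)
        oob[idx] = False                   # points never drawn are out-of-bag
        models.append(fit(X[idx], y[idx]))
        oob_masks.append(oob)

    # OOB estimate: predict each point only with learners that did not train on it
    preds = np.zeros(n)
    counts = np.zeros(n)
    for model, oob in zip(models, oob_masks):
        preds[oob] += predict(model, X[oob])
        counts[oob] += 1
    covered = counts > 0                   # points left out by at least one learner
    oob_mse = np.mean((preds[covered] / counts[covered] - y[covered]) ** 2)
    return models, oob_mse
```

With, for example, linear least squares as the base learner, `fit` and `predict` can be passed as `lambda Xb, yb: np.linalg.lstsq(Xb, yb, rcond=None)[0]` and `lambda w, Xq: Xq @ w`. The paper's contribution departs from this baseline by letting the learners interact during training rather than being built independently.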
References
Blake, C.L., Merz, C.J.: UCI repository of machine learning databases (1998)
Breiman, L.: Bagging predictors. Machine Learning 24(2), 123–140 (1996)
Breiman, L.: Out-of-bag estimation. Technical report, Statistics Department, University of California (1997)
Breiman, L.: Using iterated bagging to debias regressions. Machine Learning 45(3), 261–277 (2001)
Brown, G.: Diversity in Neural Network Ensembles. PhD thesis, School of Computer Science, University of Birmingham (2003)
Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. In: Advances in Neural Information Processing Systems, vol. 9, p. 155. The MIT Press, Cambridge (1997)
Efron, B., Tibshirani, R.: An Introduction to the Bootstrap. Chapman & Hall, Sydney (1993)
Elisseeff, A., Evgeniou, T., Pontil, M.: Stability of randomized learning algorithms. J. Machine Learning Research 6, 55–79 (2005)
Friedman, J.: Greedy function approximation: A gradient boosting machine. Annals of Statistics 29(5) (2001)
Friedman, J.: Stochastic gradient boosting. Computational Statistics and Data Analysis 38(4), 367–378 (2002)
Grandvalet, Y.: Bagging can stabilize without reducing variance. In: Dorffner, G., Bischof, H., Hornik, K. (eds.) ICANN 2001. LNCS, vol. 2130, pp. 49–56. Springer, Heidelberg (2001)
Grandvalet, Y.: Bagging equalizes influence. Machine Learning 55(3), 251–270 (2004)
Poggio, T., Rifkin, R., Mukherjee, S.: Bagging regularizes. Technical Report 214/AI Memo 2002-003, MIT CBCL (2002)
Ñanculef, R., Valle, C., Allende, H., Moraga, C.: Ensemble learning with local diversity. In: Kollias, S., Stafylopatis, A., Duch, W., Oja, E. (eds.) ICANN 2006. LNCS, vol. 4131, pp. 264–273. Springer, Heidelberg (2006)
Suen, Y., Melville, P., Mooney, R.: Combining bias and variance reduction techniques for regression. In: Proceedings of the 16th European Conference on Machine Learning, pp. 741–749 (2005)
Vlachos, P.: StatLib datasets archive (2005)
Liu, Y., Yao, X.: Ensemble learning via negative correlation. Neural Networks 12(10), 1399–1404 (1999)
© 2007 Springer-Verlag Berlin Heidelberg
Valle, C., Ñanculef, R., Allende, H., Moraga, C. (2007). Two Bagging Algorithms with Coupled Learners to Encourage Diversity. In: Berthold, M.R., Shawe-Taylor, J., Lavrač, N. (eds) Advances in Intelligent Data Analysis VII. IDA 2007. Lecture Notes in Computer Science, vol 4723. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74825-0_12
DOI: https://doi.org/10.1007/978-3-540-74825-0_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74824-3
Online ISBN: 978-3-540-74825-0