Abstract
Accuracy, diversity, and the learning characteristics of base learners critically influence the effectiveness of ensemble methods. Bias-variance decomposition of the error can be used as a tool to gain insight into the behavior of learning algorithms, in order to design ensemble methods well tuned to the properties of a specific base learner. In this work we analyse the bias-variance decomposition of the error in Support Vector Machines (SVMs), characterizing its dependence on the kernel and the kernel parameters. We show that the bias-variance decomposition offers a rationale for developing ensemble methods with SVMs as base learners, and we outline two directions for building SVM ensembles: exploiting the bias characteristics of SVMs, and exploiting the dependence of bias and variance on the kernel parameters.
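To make the decomposition concrete, the following is a minimal sketch of the bootstrap procedure for estimating bias and variance under 0-1 loss, in the spirit of Domingos' unified decomposition. It is not the paper's experimental setup: a nearest-centroid linear classifier stands in for an SVM so the sketch stays self-contained, and all names (`make_data`, `train_centroid`, the cluster parameters) are illustrative assumptions. The main prediction at a test point is the majority vote over bootstrap-trained models; bias is 1 where the main prediction disagrees with the (noise-free) label, and variance is the disagreement rate with the main prediction.

```python
import random

random.seed(0)

def make_data(n):
    # Two 2-D clusters with deterministic (noise-free) labels -- illustrative data.
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        c = 1.5 if y == 1 else -1.5
        data.append(((random.gauss(c, 1.0), random.gauss(c, 1.0)), y))
    return data

def train_centroid(train):
    # Stand-in base learner: nearest class centroid (a simple linear classifier).
    sums, counts = {0: [0.0, 0.0], 1: [0.0, 0.0]}, {0: 0, 1: 0}
    for (x, y) in train:
        sums[y][0] += x[0]; sums[y][1] += x[1]; counts[y] += 1
    cents = {c: (sums[c][0] / max(counts[c], 1), sums[c][1] / max(counts[c], 1))
             for c in (0, 1)}
    def predict(x):
        d = {c: (x[0] - cents[c][0]) ** 2 + (x[1] - cents[c][1]) ** 2 for c in (0, 1)}
        return 0 if d[0] <= d[1] else 1
    return predict

train_pool, test, B = make_data(200), make_data(300), 50
preds = []  # preds[b][i]: prediction of bootstrap model b on test point i
for _ in range(B):
    boot = [random.choice(train_pool) for _ in train_pool]
    f = train_centroid(boot)
    preds.append([f(x) for (x, _) in test])

bias = variance = net_var = error = 0.0
for i, (_, y) in enumerate(test):
    votes = [preds[b][i] for b in range(B)]
    main = max((0, 1), key=votes.count)        # main prediction (mode over models)
    b_x = 1.0 if main != y else 0.0            # bias at x (0-1 loss, noise-free)
    v_x = sum(p != main for p in votes) / B    # variance at x
    bias += b_x
    variance += v_x
    net_var += v_x if b_x == 0 else -v_x       # net variance: unbiased minus biased
    error += sum(p != y for p in votes) / B    # average 0-1 loss at x
n = len(test)
bias, variance, net_var, error = bias / n, variance / n, net_var / n, error / n
print(f"bias={bias:.3f} variance={variance:.3f} "
      f"net_variance={net_var:.3f} error={error:.3f}")
```

In the two-class noise-free case the identity `error = bias + net_variance` holds exactly: on unbiased points variance adds to the error, on biased points it subtracts, which is why variance-reducing ensembles help low-bias base learners most.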
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Valentini, G., Dietterich, T.G. (2002). Bias—Variance Analysis and Ensembles of SVM. In: Roli, F., Kittler, J. (eds) Multiple Classifier Systems. MCS 2002. Lecture Notes in Computer Science, vol 2364. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45428-4_22
Print ISBN: 978-3-540-43818-2
Online ISBN: 978-3-540-45428-1