
Bias-Variance Analysis and Ensembles of SVM

  • Conference paper
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2364)
  • Included in the conference series: Multiple Classifier Systems (MCS 2002)

Abstract

The accuracy, diversity, and learning characteristics of base learners critically influence the effectiveness of ensemble methods. The bias-variance decomposition of the error can be used as a tool to gain insight into the behaviour of learning algorithms, in order to design ensemble methods well tuned to the properties of a specific base learner. In this work we analyse the bias-variance decomposition of the error in Support Vector Machines (SVMs), characterising it with respect to the kernel and its parameters. We show that this decomposition offers a rationale for developing ensemble methods with SVMs as base learners, and we outline two directions for building SVM ensembles: one exploiting the bias characteristics of SVMs, the other exploiting the dependence of bias and variance on the kernel parameters.
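The abstract's central tool, the bias-variance decomposition of the 0/1 loss, can be illustrated with a short, self-contained sketch. The example below follows the style of Domingos' unified decomposition (main prediction, bias, and net variance) but substitutes a toy one-dimensional threshold classifier for an SVM, so the data, the learner, and all names here are illustrative assumptions rather than anything taken from the paper:

```python
# Hedged sketch of the 0/1-loss bias-variance decomposition (in the style of
# Domingos' unified decomposition), with a toy 1-D threshold classifier
# standing in for an SVM. Everything here is illustrative, not from the paper.
import random
from collections import Counter

random.seed(0)

def true_label(x):
    """Noise-free target: class 1 iff x > 0."""
    return 1 if x > 0 else 0

def sample_training_set(n=20):
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    return [(x, true_label(x)) for x in xs]

def fit_threshold(data):
    """Pick the training point that, used as a cut, minimises training error."""
    best_t, best_err = 0.0, float("inf")
    for t, _ in data:
        err = sum((1 if x > t else 0) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Train many classifiers on independently drawn training sets.
thresholds = [fit_threshold(sample_training_set()) for _ in range(200)]

test_points = [random.uniform(-1.0, 1.0) for _ in range(100)]
total_loss = total_bias = total_netvar = 0.0
for x in test_points:
    preds = [1 if x > t else 0 for t in thresholds]
    main = Counter(preds).most_common(1)[0][0]        # main (modal) prediction
    bias = 1.0 if main != true_label(x) else 0.0      # systematic error at x
    var = sum(p != main for p in preds) / len(preds)  # spread around the main
    loss = sum(p != true_label(x) for p in preds) / len(preds)
    total_bias += bias
    # Net variance: variance adds to the error at unbiased points but
    # subtracts from it at biased points (two-class, noise-free case).
    total_netvar += var if bias == 0.0 else -var
    total_loss += loss

n = len(test_points)
avg_loss, avg_bias, avg_netvar = total_loss / n, total_bias / n, total_netvar / n
# Identity (noise-free, two classes): E[0/1 loss] = bias + net variance.
print(f"loss={avg_loss:.3f}  bias={avg_bias:.3f}  net-variance={avg_netvar:.3f}")
```

The sign flip on the variance term at biased points is a known property of the 0/1-loss decomposition: on points where the learner is systematically wrong, variability among independently trained classifiers lowers the expected error.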

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Valentini, G., Dietterich, T.G. (2002). Bias-Variance Analysis and Ensembles of SVM. In: Roli, F., Kittler, J. (eds) Multiple Classifier Systems. MCS 2002. Lecture Notes in Computer Science, vol 2364. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45428-4_22

  • DOI: https://doi.org/10.1007/3-540-45428-4_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43818-2

  • Online ISBN: 978-3-540-45428-1

  • eBook Packages: Springer Book Archive
