Bias Variance Decomposition

Reference work entry in: Encyclopedia of Machine Learning and Data Mining

Definition

The bias-variance decomposition is a useful theoretical tool for understanding the performance characteristics of a learning algorithm. The following discussion is restricted to squared loss as the performance measure, although similar analyses have been undertaken for other loss functions. The case receiving the most attention is zero-one loss (i.e., classification problems), for which the decomposition is not unique and remains a topic of active research; see Domingos (2000) for details.
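
For squared loss, the decomposition takes the following standard form. The notation here is supplied for illustration and is not reproduced from this entry: f is the true function, f̂(x; D) is the model trained on data set D, and y = f(x) + ε is a noisy label with noise variance σ².

```latex
\mathbb{E}_{D,\varepsilon}\big[(y - \hat{f}(x;D))^2\big]
  = \underbrace{\big(\mathbb{E}_D[\hat{f}(x;D)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\big[(\hat{f}(x;D) - \mathbb{E}_D[\hat{f}(x;D)])^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```

The expectation is taken over both the random draw of the training set D and the label noise ε; the final σ² term is the irreducible error that no model can remove.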

The decomposition allows us to see that the expected squared error of a model (generated by a particular learning algorithm) is made up of two components, plus an irreducible noise term. The bias component tells us how far the model's average prediction, taken across different possible training sets, lies from the true value. The variance component tells us how sensitive the learning algorithm is to small changes in the training set (Fig. 1).

Bias Variance Decomposition, Fig. 1 (image omitted): The bias-variance decomposition is like trying to hit the bullseye on a...
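
The two components can also be estimated empirically by Monte Carlo: repeatedly draw training sets, fit a model on each, and examine the spread of predictions at a fixed test point. The following Python sketch illustrates this for a polynomial regressor; the true function, noise level, training-set size, and model degree are illustrative assumptions, not choices made in this entry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the entry):
# true function f, Gaussian label noise, polynomial least-squares fit.
def f(x):
    return np.sin(2 * np.pi * x)

SIGMA = 0.3    # standard deviation of the label noise
N_TRAIN = 30   # size of each training set
N_SETS = 500   # number of independently drawn training sets
DEGREE = 3     # degree of the fitted polynomial
x0 = 0.5       # test point at which bias and variance are estimated

predictions = np.empty(N_SETS)
for i in range(N_SETS):
    # Draw a fresh training set: x ~ U[0, 1], y = f(x) + noise
    x = rng.uniform(0.0, 1.0, N_TRAIN)
    y = f(x) + rng.normal(0.0, SIGMA, N_TRAIN)
    # Fit by least squares and record the prediction at the test point
    coeffs = np.polyfit(x, y, DEGREE)
    predictions[i] = np.polyval(coeffs, x0)

bias_sq = (predictions.mean() - f(x0)) ** 2  # (E[f_hat] - f(x0))^2
variance = predictions.var()                 # E[(f_hat - E[f_hat])^2]
expected_mse = bias_sq + variance + SIGMA**2 # squared-loss decomposition

print(f"bias^2   = {bias_sq:.5f}")
print(f"variance = {variance:.5f}")
print(f"bias^2 + variance + noise = {expected_mse:.5f}")
```

Raising DEGREE in this sketch typically drives the estimated bias term down and the variance term up, which is exactly the trade-off the decomposition makes visible.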


References

  • Domingos P (2000) A unified bias-variance decomposition for zero-one and squared loss. In: Proceedings of the seventeenth national conference on artificial intelligence. AAAI Press, Austin

  • Geman S, Bienenstock E, Doursat R (1992) Neural networks and the bias/variance dilemma. Neural Comput 4(1):1–58

  • Moore DS, McCabe GP (2002) Introduction to the practice of statistics. WH Freeman, New York

Copyright information

© 2017 Springer Science+Business Media New York

Cite this entry

(2017). Bias Variance Decomposition. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_74
