Bias Variance Decomposition

Reference work entry, Encyclopedia of Machine Learning

Definition

The bias-variance decomposition is a useful theoretical tool for understanding the performance characteristics of a learning algorithm. The following discussion is restricted to squared loss as the performance measure, although similar analyses have been undertaken for other loss functions. The case that has received the most attention is zero-one loss (i.e., classification problems), for which the decomposition is not unique and remains a topic of active research; see Domingos (2000) for details.
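
For squared loss, the decomposition of the expected prediction error at a fixed input x can be written as follows. The notation here is a common convention rather than the entry's own: f is the true function, \hat{f}(x; D) is the model fitted to training set D, y = f(x) + \varepsilon is a noisy observation with zero-mean noise of variance \sigma^2, and the expectation is taken over training sets D and noise \varepsilon.

$$
\mathbb{E}\!\left[\bigl(y - \hat{f}(x; D)\bigr)^{2}\right]
= \underbrace{\bigl(\mathbb{E}_{D}[\hat{f}(x; D)] - f(x)\bigr)^{2}}_{\text{bias}^{2}}
+ \underbrace{\mathbb{E}_{D}\!\left[\bigl(\hat{f}(x; D) - \mathbb{E}_{D}[\hat{f}(x; D)]\bigr)^{2}\right]}_{\text{variance}}
+ \underbrace{\sigma^{2}}_{\text{noise}}
$$

The first term is the squared bias, the second is the variance, and the third is the irreducible noise, which no learning algorithm can remove.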

The decomposition shows that the mean squared error of a model (generated by a particular learning algorithm) is made up of two components in addition to the irreducible noise. The bias component measures how far the model's predictions lie from the true values on average, where the average is taken over the different possible training sets. The variance component measures how much the model's predictions change from one training set to another, that is, how sensitive the learning algorithm is to small changes in the training data (Fig. 1).

Bias Variance Decomposition. Figure 1. The bias-variance decomposition is like trying to hit the bullseye on...
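
To make the two components concrete, the following is a minimal simulation sketch, not part of the original entry, that estimates the squared bias and the variance of a learner by refitting it on many independently drawn training sets. The data-generating function, noise level, and choice of a degree-3 polynomial learner are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # Assumed data-generating function (illustrative choice).
    return np.sin(2 * np.pi * x)

def fit_and_predict(x_train, y_train, x_test, degree=3):
    # Learner under study: ordinary least-squares polynomial fit.
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.polyval(coeffs, x_test)

n_train, n_sets, noise_sd = 20, 500, 0.3
x_test = np.linspace(0.0, 1.0, 50)
preds = np.empty((n_sets, x_test.size))

# Refit the learner on many independently drawn training sets.
for i in range(n_sets):
    x_train = rng.uniform(0.0, 1.0, n_train)
    y_train = true_f(x_train) + rng.normal(0.0, noise_sd, n_train)
    preds[i] = fit_and_predict(x_train, y_train, x_test)

avg_pred = preds.mean(axis=0)
bias_sq = np.mean((avg_pred - true_f(x_test)) ** 2)  # squared bias, averaged over x
variance = np.mean(preds.var(axis=0))                # variance, averaged over x

print(f"bias^2 ~ {bias_sq:.4f}, variance ~ {variance:.4f}, "
      f"expected MSE ~ {bias_sq + variance + noise_sd**2:.4f}")
```

Raising the polynomial degree in this sketch typically lowers the bias term while raising the variance term, which is exactly the trade-off the decomposition is meant to expose.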

Recommended Reading

  • Domingos, P. (2000). A unified bias-variance decomposition for zero-one and squared loss. In Proceedings of the national conference on artificial intelligence. Austin, TX: AAAI Press.

  • Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4(1), 1–58.

  • Moore, D. S., & McCabe, G. P. (2002). Introduction to the practice of statistics. New York: W. H. Freeman.

Copyright information

© 2011 Springer Science+Business Media, LLC

Cite this entry

(2011). Bias Variance Decomposition. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_74
