Definition
The bias-variance decomposition is a useful theoretical tool for understanding the performance characteristics of a learning algorithm. The following discussion is restricted to squared loss as the performance measure, although similar analyses have been undertaken for other loss functions. The case that has received the most attention is zero-one loss (i.e., classification problems), for which the decomposition is nonunique and a topic of active research. See Domingos (2000) for details.
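For squared loss the decomposition can be stated exactly. The identity below is a standard sketch rather than a formula reproduced from the entry; the notation is assumed here, with $f(x)$ the target value, $D$ a random training set, and $\hat{f}_D(x)$ the prediction of the model trained on $D$. It follows by adding and subtracting $\mathbb{E}_D[\hat{f}_D(x)]$ inside the square:

\[
\mathbb{E}_D\!\left[\left(\hat{f}_D(x) - f(x)\right)^2\right]
= \underbrace{\left(\mathbb{E}_D\!\left[\hat{f}_D(x)\right] - f(x)\right)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_D\!\left[\left(\hat{f}_D(x) - \mathbb{E}_D\!\left[\hat{f}_D(x)\right]\right)^2\right]}_{\text{variance}}
\]

When the observed labels are noisy, $y = f(x) + \varepsilon$ with $\mathrm{Var}(\varepsilon) = \sigma^2$, the expected squared error on $y$ additionally contains the irreducible term $\sigma^2$.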
The decomposition shows that the mean squared error of a model (generated by a particular learning algorithm) is made up of two components. The bias component measures how far the model's predictions lie from the true values on average across different possible training sets. The variance component measures how sensitive the learned model is to small changes in the training set.
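The two components can also be estimated empirically by refitting the same learning algorithm on many independently drawn training sets. The following sketch is illustrative only and rests on assumptions not in the entry: a sine target function, noisy samples, polynomial least-squares fits via NumPy, and the particular sample sizes.

import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # Target function; an assumption chosen for illustration.
    return np.sin(2 * np.pi * x)

def fit_predict(degree, x_test, n_train=30, noise=0.3):
    # Draw a fresh training set, fit a polynomial of the given
    # degree by least squares, and predict at the test points.
    x = rng.uniform(0.0, 1.0, n_train)
    y = true_f(x) + rng.normal(0.0, noise, n_train)
    coeffs = np.polyfit(x, y, degree)
    return np.polyval(coeffs, x_test)

x_test = np.linspace(0.0, 1.0, 100)
for degree in (1, 4, 12):
    # 500 models, one per simulated training set.
    preds = np.array([fit_predict(degree, x_test) for _ in range(500)])
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - true_f(x_test)) ** 2)  # squared bias, averaged over x
    variance = np.mean(preds.var(axis=0))               # variance, averaged over x
    print(f"degree {degree:2d}: bias^2 = {bias2:.4f}, variance = {variance:.4f}")

Low-degree fits show high bias and low variance; high-degree fits show the reverse, which is precisely the tradeoff the decomposition makes explicit.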
Recommended Reading
Domingos, P. (2000). A unified bias-variance decomposition for zero-one and squared loss. In Proceedings of the Seventeenth National Conference on Artificial Intelligence. Austin, TX: AAAI Press.
Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4(1), 1–58.
Moore, D. S., & McCabe, G. P. (2002). Introduction to the practice of statistics. New York: W. H. Freeman.