Definition
The bias-variance decomposition is a useful theoretical tool for understanding the performance characteristics of a learning algorithm. The following discussion is restricted to squared loss as the performance measure, although similar analyses have been undertaken for other loss functions. The case that has received the most attention is zero-one loss (i.e., classification problems), for which the decomposition is nonunique and a topic of active research. See Domingos (2000) for details.
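For squared loss, the decomposition takes the following standard form (a sketch in common notation, not quoted from this entry: $f_D$ denotes the model produced by the learning algorithm from training set $D$, and $\bar{y} = \mathbb{E}[y \mid x]$ is the noise-free target; the third term is the irreducible noise, which vanishes when labels are noiseless):

```latex
\mathbb{E}_{D,y}\!\left[\left(f_D(x) - y\right)^2\right]
= \underbrace{\left(\mathbb{E}_D\!\left[f_D(x)\right] - \bar{y}\right)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}_D\!\left[\left(f_D(x) - \mathbb{E}_D\!\left[f_D(x)\right]\right)^2\right]}_{\text{variance}}
+ \underbrace{\mathbb{E}\!\left[\left(y - \bar{y}\right)^2\right]}_{\text{noise}}
```

The expectation over $D$ is taken across different possible training sets drawn from the same distribution, which is what the bias and variance components described below refer to.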
The decomposition allows us to see that the mean squared error of a model (generated by a particular learning algorithm) is in fact made up of two components. The bias component measures how far the model's average prediction, taken across different possible training sets, lies from the true value. The variance component measures how sensitive the learning algorithm is to small changes in the training set (Fig. 1).
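The decomposition can be estimated empirically by Monte Carlo: train the same learner on many independent training sets and measure, at a fixed test point, how far the average prediction misses the truth (bias) and how much the predictions scatter (variance). The sketch below does this with NumPy; the true function, noise level, and polynomial learner are illustrative assumptions, not taken from the entry.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # Assumed ground-truth function for the simulation
    return np.sin(2 * np.pi * x)

NOISE_SD = 0.3   # label-noise standard deviation (assumed)
N_TRAIN = 30     # size of each training set
N_SETS = 500     # number of independent training sets
DEGREE = 3       # degree of the polynomial learner (assumed)

# Fixed test point at which the error is decomposed
x0 = 0.25
y0 = true_f(x0)  # noise-free target at x0

# Train the same learner on many independent training sets and
# record its prediction at x0 each time.
preds = np.empty(N_SETS)
for i in range(N_SETS):
    x = rng.uniform(0.0, 1.0, N_TRAIN)
    y = true_f(x) + rng.normal(0.0, NOISE_SD, N_TRAIN)
    coefs = np.polyfit(x, y, DEGREE)
    preds[i] = np.polyval(coefs, x0)

bias_sq = (preds.mean() - y0) ** 2   # squared bias at x0
variance = preds.var()               # variance at x0

# Expected squared error against independently noisy labels at x0
noisy_targets = y0 + rng.normal(0.0, NOISE_SD, N_SETS)
mse = np.mean((preds - noisy_targets) ** 2)

print(f"bias^2            = {bias_sq:.4f}")
print(f"variance          = {variance:.4f}")
print(f"noise             = {NOISE_SD**2:.4f}")
print(f"bias^2+var+noise  = {bias_sq + variance + NOISE_SD**2:.4f}")
print(f"empirical MSE     = {mse:.4f}")
```

Up to Monte Carlo error, the sum of the squared bias, the variance, and the noise term matches the empirical mean squared error. Raising the polynomial degree typically shrinks the bias while inflating the variance, which is the trade-off the decomposition makes visible.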
References
Domingos P (2000) A unified bias-variance decomposition for zero-one and squared loss. In: Proceedings of the seventeenth national conference on artificial intelligence. AAAI Press, Austin, TX
Geman S, Bienenstock E, Doursat R (1992) Neural networks and the bias/variance dilemma. Neural Comput 4(1):1–58
Moore DS, McCabe GP (2002) Introduction to the practice of statistics. W. H. Freeman, New York
© 2017 Springer Science+Business Media New York
Cite this entry
(2017). Bias Variance Decomposition. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_74