The bias-variance-covariance decomposition is a theoretical result underlying ensemble learning algorithms. It extends the bias-variance decomposition to linear combinations of models. For an ensemble of \(M\) models with uniformly weighted average \(\bar{f}(\mathbf{x}) = \frac{1}{M}\sum_{i=1}^{M} f_i(\mathbf{x})\), the expected squared error from a target \(d\) is:

\[
\mathbb{E}\big[(\bar{f}(\mathbf{x}) - d)^2\big] = \overline{\mathrm{bias}}^2 + \frac{1}{M}\,\overline{\mathrm{var}} + \left(1 - \frac{1}{M}\right)\overline{\mathrm{covar}},
\]

where

\[
\overline{\mathrm{bias}} = \frac{1}{M}\sum_{i}\big(\mathbb{E}[f_i] - d\big), \quad
\overline{\mathrm{var}} = \frac{1}{M}\sum_{i}\mathbb{E}\big[(f_i - \mathbb{E}[f_i])^2\big], \quad
\overline{\mathrm{covar}} = \frac{1}{M(M-1)}\sum_{i}\sum_{j \neq i}\mathbb{E}\big[(f_i - \mathbb{E}[f_i])(f_j - \mathbb{E}[f_j])\big].
\]
The error is composed of the squared average bias of the models, plus their average variance weighted by \(1/M\), plus their average pairwise covariance weighted by \((1 - 1/M)\). Thus, while a single model is governed by a two-way bias-variance tradeoff, an ensemble is governed by a three-way tradeoff: the covariance term can be negative, so an ensemble of models that err in different directions can achieve lower expected error than any of its members. This ensemble tradeoff is often referred to as the accuracy-diversity dilemma. See ensemble learning for more details.
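The decomposition can be verified numerically. The sketch below (all variable names are illustrative) simulates the predictions of \(M\) models across many trials, where a shared noise component induces positive covariance between models, and checks that the expected squared ensemble error equals the sum of the three terms:

```python
import numpy as np

# Numerical check of the bias-variance-covariance decomposition:
#   E[(f_bar - d)^2] = bias_bar^2 + (1/M)*var_bar + (1 - 1/M)*covar_bar
# Names and the simulated noise model are illustrative assumptions.
rng = np.random.default_rng(0)
M, trials = 5, 200_000
d = 1.0  # fixed target

# preds[t, i]: prediction of model i on trial t.
shared = rng.normal(0.0, 0.5, size=(trials, 1))   # common noise -> pairwise covariance
indiv = rng.normal(0.0, 0.3, size=(trials, M))    # per-model noise -> variance
offsets = np.linspace(-0.2, 0.2, M)               # per-model systematic error -> bias
preds = d + offsets + shared + indiv

# Left-hand side: expected squared error of the ensemble average.
f_bar = preds.mean(axis=1)
lhs = np.mean((f_bar - d) ** 2)

# Right-hand side: the three averaged terms (biased sample estimates).
means = preds.mean(axis=0)                                    # E[f_i]
bias_bar = np.mean(means - d)                                 # average bias
var_bar = np.mean(np.mean((preds - means) ** 2, axis=0))      # average variance
centered = preds - means
cov_matrix = centered.T @ centered / trials
covar_bar = (cov_matrix.sum() - np.trace(cov_matrix)) / (M * (M - 1))

rhs = bias_bar ** 2 + var_bar / M + (1 - 1 / M) * covar_bar
assert np.isclose(lhs, rhs)
```

Because the same trial-level sample statistics appear on both sides, the identity holds exactly (up to floating-point error), not merely in expectation. Setting the shared noise to zero drives the covariance term to zero, which illustrates why decorrelating ensemble members reduces ensemble error.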
© 2011 Springer Science+Business Media, LLC
(2011). Bias-Variance-Covariance Decomposition. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_77