The bias-variance-covariance decomposition is a theoretical result underlying ensemble learning algorithms. It extends the bias-variance decomposition to linear combinations of models. For an ensemble \(\bar{f}(x) = \frac{1}{M}\sum_{i=1}^{M} f_i(x)\) of \(M\) models, the expected squared error from a target \(d\) is:

\[
E\left[(\bar{f}(x) - d)^2\right] = \overline{\mathrm{bias}}^2 + \frac{1}{M}\,\overline{\mathrm{var}} + \left(1 - \frac{1}{M}\right)\overline{\mathrm{covar}},
\]

where

\[
\overline{\mathrm{bias}} = \frac{1}{M}\sum_{i}\left(E[f_i(x)] - d\right), \qquad
\overline{\mathrm{var}} = \frac{1}{M}\sum_{i} E\left[\left(f_i(x) - E[f_i(x)]\right)^2\right],
\]
\[
\overline{\mathrm{covar}} = \frac{1}{M(M-1)}\sum_{i}\sum_{j \neq i} E\left[\left(f_i(x) - E[f_i(x)]\right)\left(f_j(x) - E[f_j(x)]\right)\right].
\]
The error thus decomposes into the squared average bias of the member models, their average variance scaled by \(1/M\), and their average pairwise covariance scaled by \(1 - 1/M\). As \(M\) grows, the variance term shrinks while the covariance term dominates, so ensembles of weakly correlated models achieve lower error. Hence, while a single model faces a two-way bias-variance tradeoff, an ensemble is governed by a three-way tradeoff, often referred to as the accuracy-diversity dilemma. See ensemble learning for more details.
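The decomposition can be checked numerically. The sketch below is a minimal simulation (the setup is an assumption, not from the entry): \(M\) "models" are drawn as correlated noisy estimates of a scalar target \(d\), standing in for models trained on different resamples of the data. Shared noise induces pairwise covariance, and a constant offset gives each model a bias. The empirical mean squared error of the ensemble average is then compared against the three-term right-hand side.

```python
import numpy as np

rng = np.random.default_rng(0)
M, trials, d = 5, 200_000, 1.0

# M correlated predictions per trial: shared noise induces covariance,
# the +0.1 offset gives every model the same bias (illustrative values).
shared = rng.normal(0.0, 0.3, size=(trials, 1))
indiv = rng.normal(0.0, 0.5, size=(trials, M))
preds = d + 0.1 + shared + indiv

ens = preds.mean(axis=1)                  # ensemble: simple average
mse = np.mean((ens - d) ** 2)             # left-hand side, E[(f_bar - d)^2]

means = preds.mean(axis=0)                # E[f_i] for each model
bias_bar = np.mean(means - d)             # average bias
var_bar = np.mean(preds.var(axis=0))      # average variance
centered = preds - means
cov = (centered.T @ centered) / trials    # M x M sample covariance matrix
covar_bar = (cov.sum() - np.trace(cov)) / (M * (M - 1))  # avg pairwise cov

rhs = bias_bar**2 + var_bar / M + (1 - 1 / M) * covar_bar
print(f"MSE of ensemble: {mse:.6f}")
print(f"Decomposition:   {rhs:.6f}")
```

Because all moments are estimated consistently from the same sample, the two printed values agree to floating-point precision: the decomposition is an algebraic identity, not an approximation. The variance term enters with weight \(1/M = 0.2\) while the covariance term enters with weight \(0.8\), which is why decorrelating the members matters more than shrinking their individual variance as the ensemble grows.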
© 2017 Springer Science+Business Media New York
(2017). Bias-Variance-Covariance Decomposition. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_932
Print ISBN: 978-1-4899-7685-7
Online ISBN: 978-1-4899-7687-1