
Bias-Variance-Covariance Decomposition

Reference work entry, Encyclopedia of Machine Learning

The bias-variance-covariance decomposition is a theoretical result underlying ensemble learning algorithms. It extends the bias-variance decomposition to linear combinations of models: for an ensemble \(\bar{f}(\mathbf{x}) = \frac{1}{T}\sum_{t=1}^{T}f_{t}(\mathbf{x})\) formed by uniformly averaging T models, the expected squared error from a target d, taken over training sets \(\mathcal{D}\), is:

$${\mathcal{E}}_{\mathcal{D}}\left\{{\left(\bar{f}(\mathbf{x}) - d\right)}^{2}\right\} = {\overline{\mathrm{bias}}}^{2} + \frac{1}{T}\,\overline{\mathrm{var}} + \left(1 - \frac{1}{T}\right)\overline{\mathrm{covar}}.$$
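
Here T is the number of models in the ensemble, and the bars denote averages over its members. Writing \(\mu_{t} = {\mathcal{E}}_{\mathcal{D}}\{f_{t}(\mathbf{x})\}\) for the expected prediction of the t-th model (a symbol introduced here for brevity), a sketch of the standard definitions of the three terms is:

$$\overline{\mathrm{bias}} = \frac{1}{T}\sum_{t=1}^{T}\left(\mu_{t} - d\right), \qquad \overline{\mathrm{var}} = \frac{1}{T}\sum_{t=1}^{T}{\mathcal{E}}_{\mathcal{D}}\left\{{\left(f_{t}(\mathbf{x}) - \mu_{t}\right)}^{2}\right\},$$

$$\overline{\mathrm{covar}} = \frac{1}{T(T-1)}\sum_{t=1}^{T}\sum_{t^{\prime}\neq t}{\mathcal{E}}_{\mathcal{D}}\left\{\left(f_{t}(\mathbf{x}) - \mu_{t}\right)\left(f_{t^{\prime}}(\mathbf{x}) - \mu_{t^{\prime}}\right)\right\}.$$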

The error decomposes into the squared average bias of the models, a term involving their average variance, and a term involving their average pairwise covariance. Whereas a single model faces a two-way bias-variance tradeoff, an ensemble is therefore governed by a three-way tradeoff: lowering the covariance term by making the members disagree (diversity) can reduce ensemble error, but typically at the cost of higher individual bias or variance (accuracy). This tradeoff is often referred to as the accuracy-diversity dilemma. See ensemble learning for more details.
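
The decomposition can be checked numerically. The following Python sketch uses a hypothetical setup (the noise model, constants, and variable names are illustrative assumptions, not from the entry): correlated predictions for T models are drawn over many simulated training sets, and the ensemble's mean squared error is compared with the right-hand side of the decomposition. The two printed values should agree up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 5                # number of ensemble members
n_trials = 200_000   # simulated training sets
d = 1.0              # target value at a fixed input x

# Hypothetical noise model: each model's prediction is the target plus a
# fixed per-model bias, a noise component shared by all models (which
# induces pairwise covariance), and an independent private component.
biases = rng.normal(0.0, 0.3, size=T)
shared = rng.normal(0.0, 1.0, size=(n_trials, 1))
private = rng.normal(0.0, 1.0, size=(n_trials, T))
preds = d + biases + 0.6 * shared + 0.8 * private   # shape (n_trials, T)

# Left-hand side: expected squared error of the uniform average.
ensemble = preds.mean(axis=1)
lhs = np.mean((ensemble - d) ** 2)

# Right-hand side: average bias, variance, and pairwise covariance.
mu = preds.mean(axis=0)                      # E_D{f_t} for each model
bias_bar = np.mean(mu - d)
cov = np.cov(preds, rowvar=False)            # T x T covariance matrix
var_bar = np.trace(cov) / T                  # mean of the diagonal
covar_bar = (cov.sum() - np.trace(cov)) / (T * (T - 1))  # mean off-diagonal
rhs = bias_bar ** 2 + var_bar / T + (1 - 1 / T) * covar_bar

print(f"E{{(f_bar - d)^2}}              = {lhs:.4f}")
print(f"bias^2 + var/T + (1-1/T)covar = {rhs:.4f}")
```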


© 2011 Springer Science+Business Media, LLC


(2011). Bias-Variance-Covariance Decomposition. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_77
