Expectation Propagation

Synonyms

EP

Definition

Expectation propagation is an algorithm for Bayesian machine learning (see Bayesian Methods). It tunes the parameters of a simpler approximate distribution (e.g., a Gaussian) to match the exact posterior distribution of the model parameters given the data. Expectation propagation operates by propagating messages, similar to the messages in (loopy) belief propagation (see Graphical Models). Whereas messages in belief propagation correspond to exact belief states, messages in expectation propagation correspond to approximations of the belief states in terms of expectations, such as means and variances. It is a deterministic method especially well-suited to large databases and dynamic systems, where exact methods for Bayesian inference fail and Monte Carlo methods are far too slow.
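
As a concrete illustration of this message-passing scheme, the following minimal sketch applies EP to a toy one-dimensional probit model: a Gaussian prior x ~ N(0, prior_var) and one likelihood factor Φ(y_i x) per label y_i ∈ {−1, +1}. Each exact factor is replaced by a Gaussian "site" whose parameters are tuned by moment matching against the cavity distribution. The model, the function name ep_probit_1d, and the numerical settings are illustrative assumptions made for this sketch, not part of the entry itself.

import numpy as np
from scipy.stats import norm


def ep_probit_1d(y, prior_var=100.0, n_sweeps=20):
    """EP sketch (illustrative) for a toy 1-D probit model: prior x ~ N(0, prior_var),
    one factor Phi(y_i * x) per label y_i in {-1, +1}.  Each exact factor is
    approximated by a Gaussian 'site' tuned by moment matching."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    tau = np.zeros(n)       # site precisions (natural parameters)
    nu = np.zeros(n)        # site precision-times-mean
    v, m = prior_var, 0.0   # global approximation q(x) = N(m, v), initially the prior
    for _ in range(n_sweeps):
        for i in range(n):
            # 1. Cavity: remove site i from q(x).
            v_cav = 1.0 / (1.0 / v - tau[i])
            m_cav = v_cav * (m / v - nu[i])
            # 2. Moment matching: mean and variance of Phi(y_i x) * N(x; m_cav, v_cav),
            #    which are available in closed form for the probit factor.
            z = y[i] * m_cav / np.sqrt(1.0 + v_cav)
            r = norm.pdf(z) / norm.cdf(z)
            m_hat = m_cav + y[i] * v_cav * r / np.sqrt(1.0 + v_cav)
            v_hat = v_cav - v_cav ** 2 * r * (z + r) / (1.0 + v_cav)
            # 3. Update site i so that cavity * site reproduces the matched moments.
            tau[i] = 1.0 / v_hat - 1.0 / v_cav
            nu[i] = m_hat / v_hat - m_cav / v_cav
            # 4. Refresh the global approximation from the prior and all sites.
            v = 1.0 / (1.0 / prior_var + tau.sum())
            m = v * nu.sum()
    return m, v


if __name__ == "__main__":
    # Mostly positive labels, so the approximate posterior mean should be positive.
    mean, var = ep_probit_1d([+1, +1, +1, -1, +1])
    print(f"approximate posterior: N({mean:.3f}, {var:.3f})")

Each pass over the data performs the cavity, moment-matching, and site-update steps described above; only the moment-matching formulas change when a different likelihood or approximating family is used.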

Motivation and Background

One of the main problems with Bayesian methods is their computational expense: computation of the exact posterior, given the observed data, typically requires...

Recommended Reading

  • Csató, L. (2002). Gaussian processes – iterative sparse approximations. PhD thesis, Aston University, Birmingham, UK.

  • Herbrich, R., & Graepel, T. (2006). TrueSkill: A Bayesian skill rating system. (Tech. Rep. No. MSR-TR-2006-80). Cambridge, UK: Microsoft Research.

  • Heskes, T., Opper, M., Wiegerinck, W., Winther, O., & Zoeter, O. (2005). Approximate inference with expectation constraints. Journal of Statistical Mechanics: Theory and Experiment, 11, P11015-1–P11015-24.

  • Heskes, T., & Zoeter, O. (2002). Expectation propagation for approximate inference in dynamic Bayesian networks. In A. Darwiche & N. Friedman (Eds.), Proceedings of the 18th conference on uncertainty in artificial intelligence (pp. 216–223). San Francisco: Morgan Kaufmann.

  • Minka, T. (2001). A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA.

  • Minka, T. (2005). Divergence measures and message passing (Tech. Rep. No. MSR-TR-2005-173). Cambridge, UK: Microsoft Research.

  • Minka, T., & Lafferty, J. (2002). Expectation-propogation for the generative aspect model. In A. Darwiche & N. Friedman (Eds.), Proceedings of the 18th conference on uncertainty in artificial intelligence (pp. 352–359). San Francisco: Morgan Kaufmann.

  • Opper, M., & Winther, O. (2001). Tractable approximations for probabilistic models: The adaptive Thouless-Anderson-Palmer mean field approach. Physical Review Letters, 86, 3695–3699.

  • Seeger, M. (2005). Expectation propagation for exponential families (Tech. Rep.). Berkeley, CA: University of California.

  • Welling, M., Minka, T., & Teh, Y. (2005). Structured region graphs: Morphing EP into GBP. In F. Bacchus & T. Jaakkola (Eds.), Proceedings of the 21st conference on uncertainty in artificial intelligence (UAI) (pp. 609–614). Arlington, VA: AUAI Press.

Copyright information

© 2011 Springer Science+Business Media, LLC

About this entry

Cite this entry

Heskes, T. (2011). Expectation Propagation. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_290
