
Approximating Message Lengths of Hierarchical Bayesian Models Using Posterior Sampling

  • Conference paper
  • AI 2016: Advances in Artificial Intelligence (AI 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9992)

Abstract

Inference of complex hierarchical models is an increasingly common problem in modern Bayesian data analysis. Unfortunately, there are few computationally efficient and widely applicable methods for selecting between competing hierarchical models. In this paper we adapt ideas from the information-theoretic minimum message length principle and propose a powerful yet simple model selection criterion for general hierarchical Bayesian models, called MML-h. Computation of this criterion requires only that a set of samples from the posterior distribution be available. The flexibility of the new algorithm is demonstrated by a novel application to state-of-the-art Bayesian hierarchical regression estimation. Simulations show that the MML-h criterion adaptively selects between the classic ridge regression and sparse horseshoe regression estimators, and the resulting procedure is highly robust to the underlying structure of the regression coefficients.
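The page does not reproduce the MML-h formula itself, only the fact that it can be computed from posterior samples alone. As a rough illustration of how a message-length score can be estimated from sampler output, the sketch below implements a related, older construction from the MML literature: a Monte Carlo approximation to the MMLD message length (cf. Fitzgibbon, Dowe and Allison, ICML 2002). The function name, the coverage parameter, and the use of 1/likelihood importance weights are all illustrative assumptions; this is not the authors' MML-h criterion.

```python
import numpy as np

def mmld_message_length(log_lik, log_prior, coverage=0.5):
    """Monte Carlo MMLD-style message length from posterior samples.

    A sketch, not the MML-h criterion of the paper: given the log-likelihood
    log p(y|theta_i) and log-prior log pi(theta_i) evaluated at N posterior
    draws theta_1..theta_N, it approximates

        I(R) = -log integral_R pi(theta) dtheta - E_R[log p(y|theta)]

    where R is the highest-posterior-density region holding a `coverage`
    fraction of the samples.
    """
    log_lik = np.asarray(log_lik)
    log_prior = np.asarray(log_prior)

    # Region R: the top `coverage` fraction of samples ranked by
    # (unnormalised) posterior density.
    log_post = log_lik + log_prior
    n_in = max(1, int(coverage * log_post.size))
    in_R = np.argsort(log_post)[::-1][:n_in]

    # Self-normalised importance weights w_i = 1/p(y|theta_i) convert
    # posterior draws into (approximate) prior draws, so the prior mass
    # of R is estimated by sum_{i in R} w_i / sum_i w_i.  These
    # harmonic-mean-style weights are known to be high-variance.
    log_w = -log_lik
    w = np.exp(log_w - log_w.max())   # stabilised; the shift cancels in ratios
    prior_mass_R = w[in_R].sum() / w.sum()

    # Prior-weighted mean log-likelihood over R.
    mean_loglik_R = np.sum(w[in_R] * log_lik[in_R]) / w[in_R].sum()

    return -np.log(prior_mass_R) - mean_loglik_R
```

To compare two hierarchies, e.g. ridge and horseshoe regressions fitted to the same data, one would run the corresponding Gibbs sampler for each, evaluate such a score on each set of posterior draws, and prefer the model with the shorter message length. The paper's MML-h presumably refines this idea; the sketch only shows why posterior samples can suffice.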



Author information

Corresponding author

Correspondence to Daniel F. Schmidt.



Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Schmidt, D.F., Makalic, E., Hopper, J.L. (2016). Approximating Message Lengths of Hierarchical Bayesian Models Using Posterior Sampling. In: Kang, B.H., Bai, Q. (eds) AI 2016: Advances in Artificial Intelligence. AI 2016. Lecture Notes in Computer Science (LNAI), vol. 9992. Springer, Cham. https://doi.org/10.1007/978-3-319-50127-7_41

  • DOI: https://doi.org/10.1007/978-3-319-50127-7_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-50126-0

  • Online ISBN: 978-3-319-50127-7

  • eBook Packages: Computer Science; Computer Science (R0)
