How to sample from a truncated distribution if you must

Abstract

Sampling from a truncated distribution is difficult. Two major methods have been proposed for this task. The first is a random-walk MCMC algorithm: although it eventually yields the correct distribution, it can be very slow on multi-modal distributions. The second, called the ellipsoid method, is more efficient in practice for problems in which users have good prior information, but its correctness is not guaranteed. In this paper, we present a framework that unifies these two approaches. The key idea is to merge both methods into a single Markov chain using a technique called Metropolis-coupled MCMC. Once merged, the two chains can validly exchange information with each other. Although the chain constructed from the ellipsoid approach cannot be proven correct, it usually converges rapidly to a useful stationary distribution, and its information helps the chain constructed from the random-walk approach converge faster to the correct distribution.
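
To make the coupling idea above concrete, here is a minimal sketch of a Metropolis-coupled sampler in Python. It is not the paper's algorithm: the target density `log_target`, the truncation test `in_support`, and the ellipsoid parameters (`center`, `radii`) are hypothetical placeholders, and the "ellipsoid" chain is approximated by a simple Gaussian independence sampler. The sketch only illustrates how a slow-but-exact random-walk chain and a faster, prior-informed chain can run side by side and exchange states.

```python
import numpy as np

rng = np.random.default_rng(0)


def log_target(x):
    # Hypothetical unnormalised log-density on the full space (a standard Gaussian).
    return -0.5 * float(np.sum(x ** 2))


def in_support(x):
    # Hypothetical truncation region: an axis-aligned box.
    return bool(np.all(np.abs(x) < 1.0))


def log_truncated(x):
    # Truncated target: -inf outside the support.
    return log_target(x) if in_support(x) else -np.inf


def rw_step(x, scale=0.3):
    """One random-walk Metropolis step on the truncated target."""
    prop = x + scale * rng.standard_normal(x.shape)
    if np.log(rng.random()) < log_truncated(prop) - log_truncated(x):
        return prop
    return x


def ellipsoid_step(x, center, radii):
    """One independence-sampler step using a Gaussian 'ellipsoid' proposal
    built from (assumed) prior information about where the mass lies."""
    prop = center + radii * rng.standard_normal(x.shape)

    def log_q(y):  # log-density of the Gaussian proposal, up to a constant
        return -0.5 * float(np.sum(((y - center) / radii) ** 2))

    log_alpha = (log_truncated(prop) - log_truncated(x)) + (log_q(x) - log_q(prop))
    if np.log(rng.random()) < log_alpha:
        return prop
    return x


def coupled_sampler(n_iter=5000, dim=2, swap_every=10):
    """Run two chains on the same truncated target and periodically swap
    their states, in the spirit of Metropolis-coupled MCMC."""
    x_rw = np.zeros(dim)   # 'slow but exact' random-walk chain
    x_el = np.zeros(dim)   # prior-informed ellipsoid chain
    center, radii = np.zeros(dim), np.full(dim, 0.8)  # assumed prior information
    samples = []
    for t in range(n_iter):
        x_rw = rw_step(x_rw)
        x_el = ellipsoid_step(x_el, center, radii)
        if t % swap_every == 0:
            # Both chains target the same density here, so the Metropolis
            # swap ratio cancels to 1 and the exchange is always accepted.
            x_rw, x_el = x_el, x_rw
        samples.append(x_rw.copy())  # record draws from the random-walk chain
    return np.array(samples)


draws = coupled_sampler()  # e.g. draws.mean(axis=0) estimates the truncated mean
```

Because both chains in this sketch target the same truncated density, the swap move is trivially valid; in the general Metropolis-coupled setting, where the second chain has a different (possibly only approximately correct) stationary distribution, a swap would instead be accepted with the usual Metropolis ratio.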


Author information

Corresponding author

Correspondence to Ratthachat Chatpatanasiri.

About this article

Cite this article

Chatpatanasiri, R. How to sample from a truncated distribution if you must. Artif Intell Rev 31, 1 (2009). https://doi.org/10.1007/s10462-009-9121-x
