On the Convergence of MDL Density Estimation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3120)

Abstract

We present a general information exponential inequality that measures the statistical complexity of some deterministic and randomized density estimators. Using this inequality, we improve the classical results of Barron and Cover [1] on the convergence of two-part code MDL, and we derive clean finite-sample convergence bounds that are not obtainable with previous approaches.
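
To make the object of study concrete, here is a minimal sketch of the two-part code MDL density estimator analyzed in [1]: over a countable class of candidate densities, each assigned a codelength L(p) for describing the model (the first part of the code), select the density minimizing L(p) plus the negative log-likelihood of the sample in nats (the second part). The Gaussian scale grid, the uniform codelengths, and all function names below are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy import stats

def two_part_mdl_select(data, candidates):
    # Return the candidate minimizing the two-part codelength
    # L(p) + sum_i -ln p(X_i), measured in nats.
    best, best_len = None, np.inf
    for codelength, density in candidates:
        neg_loglik = -np.sum(np.log(density(data)))  # second part: encode data given p
        total = codelength + neg_loglik              # add first part: encode p itself
        if total < best_len:
            best, best_len = density, total
    return best, best_len

# Hypothetical model class: zero-mean Gaussians on a small scale grid,
# each described with ln(k) nats (a uniform code over the k candidates).
sigmas = [0.5, 1.0, 2.0, 4.0]
candidates = [(np.log(len(sigmas)), lambda x, s=s: stats.norm.pdf(x, scale=s))
              for s in sigmas]

rng = np.random.default_rng(0)
sample = rng.normal(scale=2.0, size=200)
p_hat, codelen = two_part_mdl_select(sample, candidates)

The convergence question the paper addresses is how quickly the density selected by such a minimization approaches the true density as the sample size grows; the bounds derived here quantify that rate in finite samples.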

References

  1. Barron, A., Cover, T.: Minimum complexity density estimation. IEEE Transactions on Information Theory 37, 1034–1054 (1991)

  2. Barron, A., Schervish, M.J., Wasserman, L.: The consistency of posterior distributions in nonparametric problems. The Annals of Statistics 27(2), 536–561 (1999)

  3. Le Cam, L.: Convergence of estimates under dimensionality restrictions. The Annals of Statistics 1, 38–53 (1973)

  4. Li, J.Q.: Estimation of Mixture Models. PhD thesis, Department of Statistics, Yale University (1999)

  5. Meir, R., Zhang, T.: Generalization error bounds for Bayesian mixture algorithms. Journal of Machine Learning Research 4, 839–860 (2003)

  6. Rissanen, J.: Stochastic Complexity in Statistical Inquiry. World Scientific, Singapore (1989)

  7. Seeger, M.: PAC-Bayesian generalization error bounds for Gaussian process classification. Journal of Machine Learning Research 3, 233–269 (2002)

  8. van de Geer, S.A.: Empirical Processes in M-Estimation. Cambridge University Press, Cambridge (2000)

  9. Yang, Y., Barron, A.: Information-theoretic determination of minimax rates of convergence. The Annals of Statistics 27, 1564–1599 (1999)

  10. Zhang, T.: Theoretical analysis of a class of randomized regularization methods. In: COLT 1999, pp. 156–163 (1999)

  11. Zhang, T.: Learning bounds for a generalized family of Bayesian posterior distributions. In: NIPS 2003 (to appear)

Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, T. (2004). On the Convergence of MDL Density Estimation. In: Shawe-Taylor, J., Singer, Y. (eds) Learning Theory. COLT 2004. Lecture Notes in Computer Science, vol. 3120. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-27819-1_22

  • DOI: https://doi.org/10.1007/978-3-540-27819-1_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22282-8

  • Online ISBN: 978-3-540-27819-1

