
Localized Upper and Lower Bounds for Some Estimation Problems

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3559)

Abstract

We derive upper and lower bounds for some statistical estimation problems. The upper bounds are established for the Gibbs algorithm. The lower bounds, which apply to all statistical estimators, match the upper bounds obtained for various problems. Moreover, our framework can be regarded as a natural generalization of the standard minimax framework, in that we allow the performance of the estimator to vary across the possible underlying distributions according to a pre-defined prior.
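
The abstract does not spell out its two central objects, so the following LaTeX sketch records one standard formalization consistent with it, in the spirit of the PAC-Bayesian/Gibbs-posterior literature. All notation here (the sample Z_1, ..., Z_n, loss ℓ, prior π, inverse temperature ρ, and localized rate function r_π) is introduced for illustration and is not taken from the paper itself.

% Gibbs algorithm: a randomized estimator that draws a parameter theta
% from the Gibbs posterior, i.e. the prior pi reweighted by the
% exponentiated (negative) empirical loss on the sample Z_1,...,Z_n.
\[
  \hat{\pi}_n(d\theta) \;\propto\;
  \exp\!\Big( -\rho \sum_{i=1}^{n} \ell(\theta, Z_i) \Big)\, \pi(d\theta),
  \qquad \rho > 0 .
\]

% Standard minimax framework: a single worst-case rate over the family.
\[
  \inf_{\hat{\theta}} \; \sup_{\theta \in \Theta} \;
  \mathbb{E}_{P_\theta}\, L\big(\hat{\theta}, \theta\big).
\]

% Localized generalization, as described in the abstract: the target risk
% may vary with the underlying distribution P_theta, through a rate
% function r_pi determined by the pre-defined prior pi. An estimator is
% then evaluated against the pointwise requirement
%   E_{P_theta} L(hat{theta}, theta) <= C * r_pi(theta)   for all theta,
% rather than against one uniform worst-case level.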




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, T. (2005). Localized Upper and Lower Bounds for Some Estimation Problems. In: Auer, P., Meir, R. (eds.) Learning Theory. COLT 2005. Lecture Notes in Computer Science (LNAI), vol. 3559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11503415_35


  • DOI: https://doi.org/10.1007/11503415_35

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26556-6

  • Online ISBN: 978-3-540-31892-7

  • eBook Packages: Computer Science (R0)
