
Some Rates of Convergence for the Selected Lasso Estimator

  • Conference paper, Algorithmic Learning Theory (ALT 2012)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7568)


Abstract

We consider the estimation of a function in some ordered finite or infinite dictionary. We focus on the selected Lasso estimator introduced by Massart and Meynet (2011) as an adaptation of the Lasso suited to infinite dictionaries. Using the oracle inequality established by Massart and Meynet (2011), we derive rates of convergence of this estimator over a wide range of function classes described by interpolation spaces, as in Barron et al. (2008). The results highlight that the selected Lasso estimator is adaptive to the smoothness of the function to be estimated, unlike the classical Lasso or the greedy algorithm considered by Barron et al. (2008). Moreover, we prove that the rates of convergence of this estimator are optimal in the orthonormal case.
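To illustrate why the orthonormal case is tractable (this sketch is the editor's illustration, not the paper's own code): when the dictionary is orthonormal, the Lasso minimizer of ½‖y − Xβ‖² + λ‖β‖₁ has the closed-form soft-thresholding solution β̂ⱼ = sign((Xᵀy)ⱼ) · max(|(Xᵀy)ⱼ| − λ, 0), which is what makes sharp rate calculations possible there. All names below (`soft_threshold`, the toy data) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    # Componentwise soft-thresholding: the closed-form Lasso
    # solution when the design (dictionary) is orthonormal.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(0)
n = 8
# Orthonormal dictionary via QR decomposition of a random matrix.
X, _ = np.linalg.qr(rng.standard_normal((n, n)))
beta = np.array([3.0, -2.0, 0, 0, 0, 0, 0, 0])   # sparse target
y = X @ beta + 0.1 * rng.standard_normal(n)       # noisy observations

lam = 0.5
# Since X.T @ X = I, the Lasso fit is soft-thresholding of X.T @ y:
# large coefficients are kept (shrunk by lam), small ones are zeroed.
beta_hat = soft_threshold(X.T @ y, lam)
```

With the noise level small relative to λ, the two true signal coordinates survive the threshold and the remaining six are set exactly to zero, showing the sparsity-inducing behavior the rates in the paper quantify.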


References

  • Barron, A., Cohen, A., Dahmen, W., DeVore, R.: Approximation and learning by greedy algorithms. Annals of Statistics 36(1), 64–94 (2008)

  • Bartlett, P., Mendelson, S., Neeman, J.: ℓ1-regularized linear regression: persistence and oracle inequalities. Probability Theory and Related Fields (2012)

  • Bickel, P., Ritov, Y., Tsybakov, A.: Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics 37(4), 1705–1732 (2009)

  • Birgé, L., Massart, P.: Gaussian model selection. Journal of the European Mathematical Society 3(3), 203–268 (2001)

  • Huang, C., Cheang, G., Barron, A.: Risk of penalized least squares, greedy selection and ℓ1-penalization for flexible function libraries. Submitted to the Annals of Statistics (2008)

  • Koltchinskii, V.: Sparsity in penalized empirical risk minimization. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 45(1), 7–57 (2009)

  • Massart, P., Meynet, C.: An ℓ1-oracle inequality for the Lasso. arXiv:1007.4791 (2010)

  • Massart, P., Meynet, C.: The Lasso as an ℓ1-ball model selection procedure. Electronic Journal of Statistics 5, 669–687 (2011)

  • Rigollet, P., Tsybakov, A.: Exponential Screening and optimal rates of sparse estimation. Annals of Statistics 39(2), 731–771 (2011)

  • Rivoirard, V.: Nonlinear estimation over weak Besov spaces and minimax Bayes method. Bernoulli 12(4), 609–632 (2006)

  • Tibshirani, R.: Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B 58(1), 267–288 (1996)

  • van de Geer, S.: High-dimensional generalized linear models and the Lasso. Annals of Statistics 36(2), 614–645 (2008)




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Massart, P., Meynet, C. (2012). Some Rates of Convergence for the Selected Lasso Estimator. In: Bshouty, N.H., Stoltz, G., Vayatis, N., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2012. Lecture Notes in Computer Science(), vol 7568. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34106-9_4


  • DOI: https://doi.org/10.1007/978-3-642-34106-9_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34105-2

  • Online ISBN: 978-3-642-34106-9

  • eBook Packages: Computer Science; Computer Science (R0)
