Abstract
We consider the estimation of a function in some ordered finite or infinite dictionary. We focus on the selected Lasso estimator introduced by Massart and Meynet (2011) as an adaptation of the Lasso suited to infinite dictionaries. Using the oracle inequality established by Massart and Meynet (2011), we derive rates of convergence of this estimator on a wide range of function classes described by interpolation spaces, as in Barron et al. (2008). The results highlight that the selected Lasso estimator is adaptive to the smoothness of the function to be estimated, unlike the classical Lasso or the greedy algorithm considered by Barron et al. (2008). Moreover, we prove that these rates of convergence are optimal in the orthonormal case.
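To fix ideas, here is a minimal sketch of the construction the abstract refers to, in the ℓ1-ball model selection spirit of Massart and Meynet (2011); the dyadic grid of radii and the shape of the penalty are illustrative assumptions, not a transcription of the paper. Writing $\gamma_n$ for the empirical risk and $\|\theta\|_1$ for the ℓ1-norm of the coefficients of $f_\theta = \sum_j \theta_j \phi_j$ in the dictionary $\{\phi_j\}_j$:

% Sketch only: the grid r_m = 2^m and the penalty pen(r), increasing in
% the radius r, are assumptions made for illustration.
\[
\hat f_{r} \in \operatorname*{arg\,min}_{\|\theta\|_1 \le r} \gamma_n(f_\theta),
\qquad
\hat m \in \operatorname*{arg\,min}_{m \in \mathbb{N}}
  \bigl\{ \gamma_n(\hat f_{r_m}) + \operatorname{pen}(r_m) \bigr\},
\qquad r_m = 2^m,
\]

and the selected Lasso estimator is $\hat f_{r_{\hat m}}$. The point of the construction is that the Lasso is computed over a countable family of ℓ1-balls rather than the whole space, and the radius is then chosen by penalized empirical risk, which is what makes the procedure workable for infinite dictionaries.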
References
Barron, A., Cohen, A., Dahmen, W., DeVore, R.: Approximation and learning by greedy algorithms. The Annals of Statistics 36(1), 64–94 (2008)
Bartlett, P., Mendelson, S., Neeman, J.: ℓ1-regularized linear regression: persistence and oracle inequalities. Probability Theory and Related Fields (2012)
Bickel, P., Ritov, Y., Tsybakov, A.: Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics 37(4), 1705–1732 (2009)
Birgé, L., Massart, P.: Gaussian model selection. Journal of the European Mathematical Society 3(3), 203–268 (2001)
Huang, C., Cheang, G., Barron, A.: Risk of penalized least squares, greedy selection and ℓ1-penalization for flexible function libraries. Submitted to The Annals of Statistics (2008)
Koltchinskii, V.: Sparsity in penalized empirical risk minimization. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 45(1), 7–57 (2009)
Massart, P., Meynet, C.: An ℓ1-oracle inequality for the Lasso. arXiv:1007.4791 (2010)
Massart, P., Meynet, C.: The Lasso as an ℓ1-ball model selection procedure. Electronic Journal of Statistics 5, 669–687 (2011)
Rigollet, P., Tsybakov, A.: Exponential screening and optimal rates of sparse estimation. The Annals of Statistics 39(2), 731–771 (2011)
Rivoirard, V.: Nonlinear estimation over weak Besov spaces and minimax Bayes method. Bernoulli 12(4), 609–632 (2006)
Tibshirani, R.: Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B 58(1), 267–288 (1996)
van de Geer, S.: High dimensional generalized linear models and the Lasso. The Annals of Statistics 36(2), 614–645 (2008)
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Massart, P., Meynet, C. (2012). Some Rates of Convergence for the Selected Lasso Estimator. In: Bshouty, N.H., Stoltz, G., Vayatis, N., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2012. Lecture Notes in Computer Science, vol 7568. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34106-9_4
Print ISBN: 978-3-642-34105-2
Online ISBN: 978-3-642-34106-9