Adaptive \(L_p\) \((0<p<1)\) Regularization: Oracle Property and Applications

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10634)

Abstract

In this paper, we propose adaptive \(L_p\) (\(0<p<1\)) estimators for sparse, high-dimensional, linear regression models in which the number of covariates depends on the sample size. In contrast to earlier results that cover only the case in which the number of covariates is smaller than the sample size, we prove that, under appropriate conditions, these adaptive \(L_p\) estimators possess the oracle property even when the number of covariates is much larger than the sample size. We present a series of experiments demonstrating the remarkable performance of the estimator with adaptive \(L_p\) regularization in comparison with \(L_{1}\) regularization, adaptive \(L_{1}\) regularization, and non-adaptive \(L_{p}\) regularization with \(0<p<1\), as well as its broad applicability to variable selection, signal recovery, and shape reconstruction.
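
To make the construction concrete, the following is a minimal sketch (not the authors' implementation) of an adaptive \(L_p\) (\(0<p<1\)) penalized least-squares estimator, solved by iteratively reweighted \(L_1\) minimization. The objective assumed here is \(\frac{1}{2n}\|y-X\beta\|_2^2+\lambda\sum_j w_j|\beta_j|^p\) with data-driven weights \(w_j=(|\hat{\beta}^{\mathrm{init}}_j|+\epsilon)^{-\gamma}\) built from a pilot ridge estimate; the function name and the tuning parameters (\(\lambda\), \(\gamma\), \(\epsilon\), the number of reweighting steps) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of an adaptive L_p (0 < p < 1) regularized least-squares
# estimator, solved by iteratively reweighted L1 (majorization-minimization).
# Assumed objective:  (1/2n) ||y - X beta||^2 + lam * sum_j w_j |beta_j|^p,
# with adaptive weights  w_j = 1 / (|beta_init_j| + eps)^gamma.
import numpy as np
from sklearn.linear_model import Lasso, Ridge


def adaptive_lp_estimate(X, y, p=0.5, lam=0.1, gamma=1.0, eps=1e-4, n_iter=10):
    # Pilot estimate (ridge) used to build the adaptive weights w_j.
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    w = 1.0 / (np.abs(beta_init) + eps) ** gamma

    beta = beta_init.copy()
    for _ in range(n_iter):
        # Since t^p is concave for 0 < p < 1, majorize w_j |beta_j|^p at the
        # current iterate by a weighted L1 term with per-coordinate weights u_j.
        u = p * w * (np.abs(beta) + eps) ** (p - 1.0)

        # Solve the weighted lasso by column rescaling: with z_j = u_j * beta_j,
        # the problem becomes an ordinary lasso in z on the rescaled design.
        X_scaled = X / u
        z = Lasso(alpha=lam, max_iter=10000).fit(X_scaled, y).coef_
        beta = z / u
    return beta


if __name__ == "__main__":
    # Toy sparse-recovery example with more covariates than samples (d >> n).
    rng = np.random.default_rng(0)
    n, d = 50, 200
    X = rng.standard_normal((n, d))
    beta_true = np.zeros(d)
    beta_true[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]
    y = X @ beta_true + 0.1 * rng.standard_normal(n)

    beta_hat = adaptive_lp_estimate(X, y, p=0.5)
    print("recovered support:", np.nonzero(np.abs(beta_hat) > 1e-3)[0])
```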



Acknowledgments

This work is jointly supported by the Natural Science Foundation of China (NSFC) under Grant No. 61673119 and by the Shanghai Committee of Science and Technology, China, under Grant No. 14DZ1118700.

Author information

Corresponding author

Correspondence to Yunxiao Shi.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Shi, Y., He, X., Wu, H., Jin, Z.X., Lu, W. (2017). Adaptive \(L_p\) \((0<p<1)\) Regularization: Oracle Property and Applications. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds.) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_2

  • DOI: https://doi.org/10.1007/978-3-319-70087-8_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70086-1

  • Online ISBN: 978-3-319-70087-8

  • eBook Packages: Computer Science, Computer Science (R0)
