Study on \(L_1\) over \(L_2\) Minimization for Nonnegative Signal Recovery

Journal of Scientific Computing (2023)

Abstract

In this paper, we carry out a comprehensive study of the unconstrained \(L_1\) over \(L_2\) sparsity-promoting model, which is widely used in the regime of coherent dictionaries for recovering nonnegative sparse signals. First, we prove the existence of global solutions. Second, we analyze the sparsity property of any local minimizer of the \(L_{1}/L_{2}\) model; this property serves as a certificate for ruling out points that are not local minimizers. Third, we focus on algorithmic development for the unconstrained model with a nonnegativity constraint. We derive an analytical solution for the proximal operator of \(L_{1}/L_{2}\) restricted to the nonnegative orthant. We then apply the alternating direction method of multipliers with a particular splitting, referred to as ADMM\(_p^+\), and establish its global convergence to a d-stationary solution (the sharpest notion of stationarity) by verifying that the associated Lyapunov function satisfies the Kurdyka-Łojasiewicz property, rather than imposing it as an assumption. Extensive numerical simulations confirm the superiority of ADMM\(_p^+\) over existing state-of-the-art methods for nonnegative sparse recovery.
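To make the model concrete, below is a minimal numerical sketch of the objective under study: a least-squares data-fidelity term plus the scale-invariant ratio \(\Vert x \Vert_1 / \Vert x \Vert_2\), evaluated on the nonnegative orthant. The problem sizes, the weight lam, and the projection-based handling of the nonnegativity constraint are illustrative assumptions only; the closed-form proximal operator and the ADMM\(_p^+\) splitting derived in the paper are not reproduced here.

    import numpy as np

    def l1_over_l2_objective(x, A, b, lam):
        # Unconstrained L1/L2 model: least-squares data fit plus the
        # scale-invariant ratio ||x||_1 / ||x||_2 (x must be nonzero).
        residual = A @ x - b
        ratio = np.linalg.norm(x, 1) / np.linalg.norm(x, 2)
        return 0.5 * residual @ residual + lam * ratio

    def project_nonnegative(x):
        # Projection onto the nonnegative orthant enforces x >= 0.
        return np.maximum(x, 0.0)

    # Tiny synthetic illustration; sizes, support, and lam are arbitrary choices.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    x_true = np.zeros(50)
    x_true[[3, 17, 41]] = rng.uniform(1.0, 2.0, size=3)  # nonnegative 3-sparse signal
    b = A @ x_true
    print(l1_over_l2_objective(project_nonnegative(x_true), A, b, lam=0.1))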

Data Availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Notes

  1. The original SOOT algorithm proposed in [29] aims to solve blind deconvolution, i.e., to recover the unknown signal and the blur kernel simultaneously.

References

  1. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Math. Program. 146, 459–494 (2014)

  2. Bochnak, J., Coste, M., Roy, M.-F.: Real algebraic geometry, vol. 36. Springer, Berlin (1998)

  3. Bolte, J., Daniilidis, A., Lewis, A.: The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim. 17, 1205–1223 (2007)

  4. Bredies, K., Lorenz, D.A., Reiterer, S.: Minimization of non-smooth, non-convex functionals by iterative thresholding. J. Optim. Theory Appl. 165, 78–112 (2015)

  5. Candes, E., Tao, T.: Decoding by linear programming. IEEE Trans. Inform. Theory 51, 4203–4215 (2005)

  6. Chartrand, R.: Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process Lett. 14, 707–710 (2007)

  7. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM J. Sci. Comp. 20, 33–61 (1998)

  8. Clarke, F.H.: Optimization and nonsmooth analysis, vol. 5. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, Philadelphia (1990)

  9. Cohen, A., Dahmen, W., Devore, R.: Compressed sensing and best \(k\)-term approximation. J. Am. Math. Soc. 22, 211–231 (2009)

  10. Dong, H., Tao, M.: On the linear convergence to weak/standard D-stationary points of DCA-based algorithms for structured nonsmooth DC programming. J. Optim. Theory Appl. 189, 190–220 (2021)

  11. Esser, E., Lou, Y., Xin, J.: A method for finding structured sparse solutions to nonnegative least squares problems with applications. SIAM J. Imag. Sci. 6, 2010–2046 (2013)

  12. Fannjiang, A., Liao, W.: Coherence-pattern-guided compressive sensing with unresolved grids. SIAM J. Imag. Sci. 5, 179–202 (2012)

  13. Finlayson-Pitts, B.: Unpublished data. Provided by Wingen, L. M. (2000)

  14. Gong, P., Zhang, C., Lu, Z., Huang, J.Z., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. JMLR Worksh. Conf. Proceed. 28, 37–45 (2013)

  15. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Nat. Bur. Stand. 49, 263–265 (1952)

  16. Hong, M.Y., Luo, Z.Q., Razaviyayn, M.: Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM J. Optim. 26, 337–364 (2016)

  17. Hoyer, P.O.: Non-negative sparse coding. In: Proceedings of IEEE Workshop on Neural Networks for Signal Processing, pp. 557–565 (2002)

  18. Hurley, N., Rickard, S.: Comparing measures of sparsity. IEEE Trans. Inform. Theory 55, 4723–4741 (2009)

  19. Ji, H., Li, J., Shen, Z., Wang, K.: Image deconvolution using a characterization of sharp images in wavelet domain. Appl. Comput. Harmon. Anal. 32, 295–304 (2012)

  20. Li, G.Y., Pong, T.K.: Global convergence of splitting methods for nonconvex composite optimization. SIAM J. Optim. 25, 2434–2460 (2015)

  21. Li, H., Lin, Z.: Accelerated proximal gradient methods for nonconvex programming. Adv. Neural. Inf. Process. Syst. 1, 379–387 (2015)

  22. Li, J., So, A.M.-C., Ma, W.-K.: Understanding notions of stationarity in nonsmooth optimization: a guided tour of various constructions of subdifferential for nonsmooth functions. IEEE Signal Proc. Mag. 37, 18–31 (2020)

  23. Morup, M., Madsen, K.H., Hansen, L.K.: Approximate \(l_0\) constrained non-negative matrix and tensor factorization. In: ISCAS, pp. 1328–1331 (2008)

  24. Nakayama, S., Gotoh, J.Y.: On the superiority of PGMs to PDCAs in nonsmooth nonconvex sparse regression. Optim. Lett. 15, 2831–2860 (2021)

  25. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comp. 24, 227–234 (1995)

  26. Nikolova, M.: Local strong homogeneity of a regularized estimator. SIAM J. Math. Anal. 61, 633–658 (2000)

  27. Pang, J.S., Razaviyayn, M., Alvarado, A.: Computing B-stationary points of nonsmooth DC programs. Math. Oper. Res. 42, 95–118 (2017)

  28. Rahimi, Y., Wang, C., Dong, H., Lou, Y.: A scale-invariant approach for sparse signal recovery. SIAM J. Sci. Comp. 41, A3649–A3672 (2019)

  29. Repetti, A., Pham, M.Q., Duval, L., Chouzenoux, E., Pesquet, J.C.: Euclid in a taxicab: sparse blind deconvolution with smoothed \({\ell _1}/{\ell _2}\) regularization. IEEE Signal Process Lett. 22, 539–543 (2015)

  30. Rockafellar, R.T., Wets, R.J.B.: Variational analysis. Springer, Berlin (1998)

  31. Tao, M.: Minimization of L\(_1\) over L\(_2\) for sparse signal recovery with convergence guarantee. SIAM J. Sci. Comp. 44, A770–A797 (2022)

  32. Tao, M., Li, J.N.: Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity. J. Optim. Theory Appl. 197, 205–232 (2023)

  33. Vavasis, S.A.: Derivation of compressive sensing theorems from the spherical section property. University of Waterloo (2009)

  34. Wang, C., Yan, M., Rahimi, Y., Lou, Y.: Accelerated schemes for the L\(_1\)/L\(_2\) minimization. IEEE Trans. Signal Process. 68, 2660–2669 (2020)

  35. Wang, Y., Yin, W., Zeng, J.: Global convergence of ADMM in nonconvex nonsmooth optimization. J. Sci. Comp. 78, 1–35 (2019)

  36. Yin, P., Esser, E., Xin, J.: Ratio and difference of \( \ell _{1} \) and \( \ell _{2} \) norms and sparse representation with coherent dictionaries. Comm. Info. Systems 14, 87–109 (2014)

  37. Zeng, L.Y., Yu, P.R., Pong, T.K.: Analysis and algorithms for some compressed sensing models based on L\(_1\)/L\(_2\) minimization. SIAM J. Optim. 31, 1576–1603 (2021)

  38. Zeng, J.S., Yin, W.T., Zhou, D.X.: Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints. J. Sci. Comp. 91, 61 (2022)

Acknowledgements

We thank Dr. Penghang Yin of the Department of Mathematics and Statistics, University at Albany, for providing the code for generating the DOAS problems, Algorithm 3 in [36], and the SGPM.

Funding

Min Tao was partially supported by the National Natural Science Foundation of China (No. 11971228) and the Jiangsu University QingLan Project. The work of Xiao-Ping Zhang was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), Grant No. RGPIN-2020-04661.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Min Tao.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Tao, M., Zhang, XP. Study on \(L_1\) over \(L_2\) Minimization for Nonnegative Signal Recovery. J Sci Comput 95, 94 (2023). https://doi.org/10.1007/s10915-023-02225-2
