A Smoothing Inertial Neural Network for Sparse Signal Reconstruction with Noise Measurements via \(L_p\)-\(L_1\) minimization

Abstract

In this paper, a smoothing inertial neural network (SINN) is proposed for the \(L_{p}\)-\(L_{1}\) \((1\ge p>0)\) minimization problem, whose objective function is non-smooth, non-convex, and non-Lipschitz. First, based on a smoothing approximation technique, the original problem is transformed into a smooth optimization problem, which effectively handles the \(L_{p}\)-\(L_{1}\) \((1\ge p>0)\) minimization model with non-smooth terms. Second, the Lipschitz property of the gradient of the smoothed objective function is discussed. Then, through theoretical analysis, the existence and uniqueness of the solution are discussed under the restricted isometry property (RIP) condition, and it is proved that the proposed SINN converges to the optimal solution of the minimization problem. Finally, the effectiveness and superiority of the proposed SINN are verified by the successful recovery performance under different impulsive noise levels.
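Since the exact model (15) and the specific smoothing function appear only in the main text, the following is a minimal numerical sketch under common assumptions from the compressed-sensing literature: a representative \(L_p\)-\(L_1\) objective \(\Vert Ax-b\Vert _1+\mu \Vert x\Vert _p^p\), smoothed componentwise by \(\sqrt{(\cdot )^2+\varepsilon ^2}\), and driven by the inertial dynamics \(\ddot{x}+\lambda \dot{x}=-\nabla \hat{f}(x,\varepsilon )\) that appear in (46) of the appendix. All names and parameter values below are illustrative, not the paper's.

```python
# Minimal sketch, NOT the paper's exact model or network: it only illustrates
# running inertial dynamics  x'' + lam*x' = -grad f_eps(x)  (cf. (46)) on a
# smoothed surrogate of a representative Lp-L1 objective
#   ||A x - b||_1 + mu * ||x||_p^p,  with the smoothing sqrt(t^2 + eps^2).
import numpy as np

rng = np.random.default_rng(0)
m, n, p, mu, eps, lam = 60, 128, 0.5, 0.05, 1e-2, 2.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(m)          # noisy measurements

def grad_smoothed(x):
    """Gradient of sum_i sqrt(r_i^2+eps^2) + mu*sum_j (x_j^2+eps^2)^(p/2), r = A x - b."""
    r = A @ x - b
    g_fid = A.T @ (r / np.sqrt(r**2 + eps**2))           # smoothed L1 data term
    g_reg = mu * p * x * (x**2 + eps**2) ** (p / 2 - 1)  # smoothed Lp regularizer
    return g_fid + g_reg

# semi-implicit Euler discretization of the second-order (inertial) ODE, v = x'
x, v, h = np.zeros(n), np.zeros(n), 1e-2
for _ in range(20000):
    v += h * (-lam * v - grad_smoothed(x))               # v' = -lam*v - grad f_eps(x)
    x += h * v                                           # x' = v

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```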

References

  1. A. Beck, M. Teboulle, A fast iterative shrinkage thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)

    Article  MathSciNet  MATH  Google Scholar 

  2. S. Becker, J. Bobin, E. Candes, Nesta: a fast and accurate first-order method for sparse recovery. SIAM J. Imaging Sci. 4(1), 1–39 (2011)

    Article  MathSciNet  MATH  Google Scholar 

  3. W. Bian, X. Chen, Smoothing neural network for constrained nonlipschitz optimization with applications. IEEE Trans. Neural Networks Learn. Syst. 23(3), 399–411 (2012)

    Article  MathSciNet  Google Scholar 

  4. W. Bian, X. Chen, Neural network for nonsmooth, nonconvex constrained minimization via smooth approximation. IEEE Trans. Neural Networks Learn. Syst. 25(3), 545–556 (2013)

    Article  Google Scholar 

  5. W. Bian, X. Xue, Subgradient-based neural networks for nonsmooth nonconvex optimization problems. IEEE Trans. Neural Networks. 20(6), 1024–1038 (2009)

    Article  Google Scholar 

  6. S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2010)

    Article  MATH  Google Scholar 

  7. E. Candes, The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique. 346(9), 589–592 (2008)

    Article  MathSciNet  MATH  Google Scholar 

  8. E. Candes, J. Romberg, Sparsity and incoherence in compressive sampling. Inverse Prob. 23(3), 969–985 (2006)

    Article  MathSciNet  MATH  Google Scholar 

  9. S. Chen, D. Donoho, M. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998)

    Article  MathSciNet  MATH  Google Scholar 

  10. X. Chen, Smoothing methods for nonsmooth, nonconvex minimization. Math. Program. 134(1), 71–99 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  11. F. Clarke, Optimization and nonsmooth analysis. Soc. Ind. Appl. Math. (1990). https://epubs.siam.org/doi/abs/10.1137/1.9781611971309

  12. E. Elhamifar, R. Vidal, Sparse subspace clustering: algorithm, theory, and applications. IEEE Trans. Pattern Anal. Mach. Intel. 35(11), 2765–2781 (2012)

    Article  Google Scholar 

  13. E. Esser, Y. Lou, J. Xin, A method for finding structured sparse solutions to non-negative least squares problems with applications. SIAM J. Imaging Sci. 6(4), 2010–2046 (2013)

    Article  MathSciNet  MATH  Google Scholar 

  14. X. He, H. Wen, T. Huang, A fixed-time projection neural network for solving \(l_1\) minimization problem. IEEE Trans. Neural Networks Learn. Syst. (2021). https://doi.org/10.1109/TNNLS.2021.3088535

    Google Scholar 

  15. C. Hu, Y. Liu, G. Li, X. Wang, Improved focuss method for reconstruction of cluster structured sparse signals in radar imaging. Sci. China (Inf. Sci.) 55(8), 1776–1788 (2012)

    Article  MathSciNet  MATH  Google Scholar 

  16. N. Jason, K. Sami, F. Marco, S. Tamer, Theory and implementation of an analog-to-information converter using random demodulation. In: IEEE International Symposium on Circuits and Systems, pp.1959-1962 (2007)

  17. W. John, Y. Allen, G. Arvind, Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intel. 31(2), 210–227 (2009)

    Article  Google Scholar 

  18. Y. Li, A. Cichocki, S. Amari, Analysis of sparse representation and blind source separation. MIT Press. 16(6), 1193–1234 (2004)

    MATH  Google Scholar 

  19. Y. Li, A. Cichocki, S. Amari, Blind estimation of channel parameters and source components for eeg signals:a sparse factorization approach. IEEE Trans. Neural Networks. 17(2), 419–431 (2006)

    Article  Google Scholar 

  20. J. Mairal, F. Bach, J. Ponce, G. Sapiro, A. Zisserman, Non-local sparse models for image restoration. In: 2009 IEEE 12th International Conference on Computer Vision (ICCV). pp. 2272–2279 (2010)

  21. B. Natarajan, Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)

    Article  MathSciNet  MATH  Google Scholar 

  22. S. Qin, X. Yang, X. Xue, J. Song, A one-layer recurrent neural network for pseudoconvex optimization problems with equality and inequality constraints. IEEE Trans. Cybern. 47(10), 3063–3074 (2017)

    Article  Google Scholar 

  23. Y. Tian, Z. Wang, Stochastic stability of markovian neural networks with generally hybrid transition rates. IEEE Trans. Neural Networks Learn. Syst. (2021). https://doi.org/10.1109/TNNLS.2021.3084925

    Article  Google Scholar 

  24. Y. Tian, Z. Wang, Extended dissipativity analysis for markovian jump neural networks via double-integral-based delay-product-type lyapunov functional. IEEE Trans. Neural Networks Learn. Syst. 32(7), 3240–3246 (2021)

    Article  MathSciNet  Google Scholar 

  25. R. Tibshirani, Regression shrinkage and selection via the lasso. J. Royal Statist. Soc.. Ser. B: Methodol. 58(1), 267–288 (1996)

  26. J. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory. 50(10), 2231–2242 (2004)

    Article  MathSciNet  MATH  Google Scholar 

  27. D. Wang, Z. Zhang, Generalized sparse recovery model and its neural dynamical optimization method for compressed sensing. Circ. Syst. Signal Process. 36, 4326–4353 (2017)

    Article  MathSciNet  MATH  Google Scholar 

  28. D. Wang, Z. Zhang, KKT condition-based smoothing recurrent neural network for nonsmooth nonconvex optimization in compressed sensing. Neural Comput. Appl. 31, 2905–2920 (2019)

    Article  Google Scholar 

  29. Y. Wang, G. Zhou, L. Caccetta, W. Liu, An alternative lagrange-dual based algorithm for sparse signal reconstruction. IEEE Trans. Signal Process. 59(4), 1895–1901 (2011)

    Article  Google Scholar 

  30. F. Wen, P. Liu, Y. Liu, R. Qiu, W. Yu, Robust sparse recovery in impulsive noise via \(l_1\)-\(l_p\) optimization. IEEE Trans. Signal Process. 65(1), 105–118 (2017)

    MathSciNet  Google Scholar 

  31. H. Wen, H. Wang, X. He, A neurodynamic algorithm for sparse signal reconstruction with finite-time convergence. Circ. Syst. Signal Process. 39(12), 6058–6072 (2020)

    Article  Google Scholar 

  32. Y. Xiao, H. Zhu, S. Wu, Primal and dual alternating direction algorithms for \(l_1\)-\(l_1\)-norm minimization problems in compressive sensing. Comput. Optim. Appl. 54(2), 441–459 (2013)

    MathSciNet  MATH  Google Scholar 

  33. Y. Zhao, X. He, T. Huang, J. Huang, Smoothing inertial projection neural network for minimization \(l_{p-q}\) in sparse signal reconstruction. Neural Networks. 99, 31–41 (2018)

    MATH  Google Scholar 

  34. Y. Zhao, X. He, T. Huang, J. Huang, P. Li, A smoothing neural network for minimization \(l_1\)-\(l_p\)in sparse signal reconstruction with measurement noises. Neural Networks. 122, 40–53 (2020)

    MATH  Google Scholar 

  35. Y. Zhao, X. Liao, X. He, R. Tang, Centralized and collective neurodynamic optimization approaches for sparse signal reconstruction via \(l_1\)-minimization. IEEE Trans. Neural Networks Learn. Syst. (2021). https://doi.org/10.1109/TNNLS.2021.3085314

    Google Scholar 

  36. J. Zhu, S. Rosset, T. Hastie, R. Tibshirani, 1-norm support vector machines. Adv. Neural Inf. Process. Syst. 16(1), 16 (2003)

    Google Scholar 

Download references

Acknowledgements

This work is supported by the Natural Science Foundation of China (Grant no. 61773320), the Fundamental Research Funds for the Central Universities (Grant no. XDJK2020TY003), and the Natural Science Foundation Project of Chongqing CSTC (Grant no. cstc2018jcyjAX0583).

Author information


Corresponding author

Correspondence to Xing He.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A

Proof of Theorem 2

We take \(x^*\in \Omega \) to be a stationary point as defined in (19), and consider the Lyapunov function \(L(t)=\frac{1}{2}\Vert x(t)-x^*\Vert ^2\). Then, the following two relations hold:

$$\begin{aligned} \dot{L}(t)=(x(t)-x^*)^T\dot{x}(t),\quad \ddot{L}(t)=(\dot{x}(t))^T\dot{x}(t)+(x(t)-x^*)^T\ddot{x}(t). \end{aligned}$$
(44)

and so we can obtain

$$\begin{aligned} \ddot{L}(t)+\lambda \dot{L}(t)=\Vert \dot{x}(t)\Vert ^2+\langle x(t)-x^*,\ddot{x}(t)+\lambda \dot{x}(t)\rangle . \end{aligned}$$
(45)
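For completeness, (45) follows from (44) simply by forming \(\ddot{L}(t)+\lambda \dot{L}(t)\) and grouping terms:

$$\begin{aligned} \ddot{L}(t)+\lambda \dot{L}(t)&=\Vert \dot{x}(t)\Vert ^2+\langle x(t)-x^*,\ddot{x}(t)\rangle +\lambda \langle x(t)-x^*,\dot{x}(t)\rangle \\&=\Vert \dot{x}(t)\Vert ^2+\langle x(t)-x^*,\ddot{x}(t)+\lambda \dot{x}(t)\rangle . \end{aligned}$$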

Calculating with (15), it is not hard to obtain

$$\begin{aligned} \begin{aligned} \ddot{x}(t)+\lambda \dot{x}(t)&=-\nabla \hat{f}(x(t),\varepsilon (t))\\&=-\mathbb {R}(x(t),\varepsilon (t)). \end{aligned} \end{aligned}$$
(46)

Then, substituting (46) into (45) yields

$$\begin{aligned} \Vert \dot{x}(t)\Vert ^2=\ddot{L}(t)+\lambda \dot{L}(t)+\langle x(t)-x^*,\mathbb {R}(x(t),\varepsilon (t))\rangle \end{aligned}$$
(47)

Since \(A\mathbb {R}(x^*,\varepsilon (t))=0\), we have

$$\begin{aligned} \Vert \dot{x}(t)\Vert ^2=\ddot{L}(t)+\lambda \dot{L}(t) +\langle x(t)-x^*,\mathbb {R}(x(t),\varepsilon (t))-A\mathbb {R}(x^*,\varepsilon (t))\rangle . \end{aligned}$$
(48)

From condition (ii), it follows that

$$\begin{aligned} \Vert \dot{x}(t)\Vert ^2\ge \ddot{L}(t)+\lambda \dot{L}(t)+\sigma \Vert \mathbb {R}(x(t),\varepsilon (t))\Vert ^2. \end{aligned}$$
(49)

Combining (46) and (49), we obtain

$$\begin{aligned} \Vert \dot{x}(t)\Vert ^2\ge \ddot{L}(t)+\lambda \dot{L}(t)+\sigma \Vert \ddot{x}(t)+\lambda \dot{x}(t)\Vert ^2. \end{aligned}$$
(50)

Therefore, (50) can be converted to

$$\begin{aligned} \ddot{L}(t)+\lambda \dot{L}(t)+\sigma \Vert \ddot{x}(t)\Vert ^2+\sigma \lambda \frac{d\Vert \dot{x}(t)\Vert ^2}{dt} +(\sigma \lambda ^2-1)\Vert \dot{x}(t)\Vert ^2\le 0 \end{aligned}$$
(51)
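The passage from (50) to (51) only uses the expansion of the last term of (50) together with \(2\langle \ddot{x}(t),\dot{x}(t)\rangle =\frac{d}{dt}\Vert \dot{x}(t)\Vert ^2\):

$$\begin{aligned} \sigma \Vert \ddot{x}(t)+\lambda \dot{x}(t)\Vert ^2=\sigma \Vert \ddot{x}(t)\Vert ^2+\sigma \lambda \frac{d\Vert \dot{x}(t)\Vert ^2}{dt}+\sigma \lambda ^2\Vert \dot{x}(t)\Vert ^2, \end{aligned}$$

after which the term \(\Vert \dot{x}(t)\Vert ^2\) is moved from the left-hand side of (50) to the right.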

In view of (51), define the auxiliary function

$$\begin{aligned} V(t)&=\dot{L}(t)+\lambda L(t)+\sigma \lambda \Vert \dot{x}(t)\Vert ^2\\&\quad +\int _{0}^t\sigma \Vert \ddot{x}(s)\Vert ^2ds+(\sigma \lambda ^2-1)\int _{0}^t\Vert \dot{x}(s)\Vert ^2ds \end{aligned}$$
(52)
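Indeed, differentiating (52) shows that \(\dot{V}(t)\) is exactly the left-hand side of (51):

$$\begin{aligned} \dot{V}(t)=\ddot{L}(t)+\lambda \dot{L}(t)+\sigma \lambda \frac{d\Vert \dot{x}(t)\Vert ^2}{dt}+\sigma \Vert \ddot{x}(t)\Vert ^2+(\sigma \lambda ^2-1)\Vert \dot{x}(t)\Vert ^2\le 0. \end{aligned}$$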

From (51), we know V(t) is a monotone nonincreasing function, so we obtain

$$\begin{aligned} V(t)&\le \dot{L}(0)+\lambda L(0)+\sigma \lambda \Vert \dot{x}(0)\Vert ^2\\&=(x(0)-x^*)^T\dot{x}(0)+\frac{\lambda }{2}\Vert x(0)-x^*\Vert ^2+\sigma \lambda \Vert \dot{x}(0)\Vert ^2\\&=H_{h_{0}},\quad t>0 \end{aligned}$$
(53)

Then, it can be obtained from condition (c)

$$\begin{aligned} \dot{L}(t)+\lambda L(t)\le H_{h_{0}} \end{aligned}$$
(54)

Next, multiplying both sides of (54) by \(\mathrm{exp}(\lambda t)\) and integrating from 0 to t gives

$$\begin{aligned} \begin{aligned} \left( \dot{L}(t)+\lambda L(t)\right) \mathrm{exp}(\lambda t)\le H_{h_{0}}\mathrm{exp}(\lambda t)\\ \frac{d}{dt}\left( L(t)\mathrm{exp}(\lambda t)\right) \le H_{h_{0}}\mathrm{exp}(\lambda t)\\ L(t)\le \frac{H_{h_{0}}}{\lambda }+L(0)\mathrm{exp}(-\lambda t) \end{aligned} \end{aligned}$$
(55)
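For clarity, the last line of (55) comes from integrating the middle line over \([0,t]\):

$$\begin{aligned} L(t)\mathrm{exp}(\lambda t)-L(0)\le \int _{0}^{t}H_{h_{0}}\mathrm{exp}(\lambda s)ds=\frac{H_{h_{0}}}{\lambda }\left( \mathrm{exp}(\lambda t)-1\right) , \end{aligned}$$

and then multiplying through by \(\mathrm{exp}(-\lambda t)\), with the remaining term \(-\frac{H_{h_{0}}}{\lambda }\mathrm{exp}(-\lambda t)\) dropped in the stated bound.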

Here \(h_{0}=0\); hence the trajectory x(t) and L(t) are bounded.

Then, from inequality (53), we can get

$$\begin{aligned} \dot{L}(t)+\sigma \lambda \Vert \dot{x}(t)\Vert ^2=\langle x(t)-x^*,\dot{x}(t)\rangle +\sigma \lambda \Vert \dot{x}(t)\Vert ^2\le H_{h_{0}} \end{aligned}$$
(56)
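Since the trajectory is bounded, (56) yields a quadratic inequality in \(\Vert \dot{x}(t)\Vert \); this is the step behind the boundedness claim below. Writing \(C:=\sup _{t\ge 0}\Vert x(t)-x^*\Vert <+\infty \) (a symbol introduced here only for this estimate),

$$\begin{aligned} \sigma \lambda \Vert \dot{x}(t)\Vert ^2\le H_{h_{0}}-\langle x(t)-x^*,\dot{x}(t)\rangle \le H_{h_{0}}+C\Vert \dot{x}(t)\Vert , \end{aligned}$$

so \(\Vert \dot{x}(t)\Vert \) cannot grow unboundedly.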

That is, \(\Vert \dot{x}(t)\Vert \) is also bounded. Then, the following can be obtained from (49) and (53): \(\int _{0}^{+\infty }\Vert \dot{x}(r)\Vert ^2dr=A<+\infty \) and \(\int _{0}^{+\infty }\Vert \ddot{x}(r)\Vert dr\le B\), where \(A,\ B>0\); therefore

$$\begin{aligned} \int _{0}^{+\infty }\Vert \dot{x}(r)\Vert ^2\Vert \ddot{x}(r)\Vert dr&=\frac{1}{3}\lim \limits _{t\rightarrow \infty }\left( \Vert \dot{x}(t)\Vert ^3-\Vert \dot{x}(0)\Vert ^3\right) \\&\le A\int _{0}^{+\infty }\Vert \ddot{x}(r)\Vert dr<+\infty \end{aligned}$$
(57)
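The first relation in (57) rests on the elementary estimate

$$\begin{aligned} \frac{d}{dt}\left( \frac{1}{3}\Vert \dot{x}(t)\Vert ^3\right) =\Vert \dot{x}(t)\Vert ^2\frac{d}{dt}\Vert \dot{x}(t)\Vert \le \Vert \dot{x}(t)\Vert ^2\Vert \ddot{x}(t)\Vert , \end{aligned}$$

integrated over \([0,+\infty )\).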

Thus, the limit of \(\Vert \dot{x}(t)\Vert ^3\) exists, and hence \(\lim \limits _{t\rightarrow \infty }\Vert \dot{x}(t)\Vert \) also exists. Since \(\int _{0}^{+\infty }\Vert \dot{x}(r)\Vert ^2dr=A<+\infty \) and L(t) is bounded, we obtain \(\lim \limits _{t\rightarrow \infty }\Vert \dot{x}(t)\Vert =0\), and further \(\lim \limits _{t\rightarrow \infty }\dot{x}(t)=0\).

Afterward, we prove that \(\lim \limits _{t\rightarrow \infty }\Vert \ddot{x}(t)\Vert =0\).

Define \(h_{\upsilon }(t)=\frac{1}{\upsilon }(\dot{x}(t+\upsilon )-\dot{x}(t))\); then, by a simple calculation,

$$\begin{aligned} \dot{h}_{\upsilon }(t)+\lambda h_{\upsilon }(t)= -\frac{1}{\upsilon }\left( \mathbb {R}(x(t+\upsilon ),\varepsilon (t+\upsilon ))-\mathbb {R}(x(t),\varepsilon (t))\right) \end{aligned}$$
(58)
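Relation (58) is obtained by evaluating the dynamics (46) at \(t+\upsilon \) and at t, subtracting, and dividing by \(\upsilon \):

$$\begin{aligned} \frac{1}{\upsilon }\left( \ddot{x}(t+\upsilon )-\ddot{x}(t)\right) +\frac{\lambda }{\upsilon }\left( \dot{x}(t+\upsilon )-\dot{x}(t)\right) =-\frac{1}{\upsilon }\left( \mathbb {R}(x(t+\upsilon ),\varepsilon (t+\upsilon ))-\mathbb {R}(x(t),\varepsilon (t))\right) , \end{aligned}$$

whose left-hand side is exactly \(\dot{h}_{\upsilon }(t)+\lambda h_{\upsilon }(t)\).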

Then, according to condition (a), we can get

$$\begin{aligned} \Vert \mathbb {R}(x(t+\upsilon ),\varepsilon (t+\upsilon ))-\mathbb {R}(x(t),\varepsilon (t))\Vert&\le \Vert \mathbb {R}(x(t+\upsilon ),\varepsilon (t+\upsilon ))-\mathbb {R}(x(t),\varepsilon (t+\upsilon ))\Vert \\&\quad +\Vert \mathbb {R}(x(t),\varepsilon (t+\upsilon ))-\mathbb {R}(x(t),\varepsilon (t))\Vert \\&\le \frac{\upsilon }{\zeta } \sup _{r \in [t,+\infty )}\Vert \dot{x}(r)\Vert +k_{j}(1-\exp (-\upsilon ))\Vert u(t)\Vert \end{aligned}$$
(59)

Integrating (58), we can easily get \(\lim \limits _{t\rightarrow +\infty } \sup \Vert h_{\upsilon }(t)\Vert =0\), and then \(\lim \limits _{t\rightarrow \infty }\Vert \ddot{x}(t)\Vert =0\) follows from \(\Vert \ddot{x}(t)\Vert \le \sup \Vert h_{\upsilon }(t)\Vert \). Since, by (46), \(\nabla _{x}\hat{f}(x(t),\varepsilon (t))=-(\ddot{x}(t)+\lambda \dot{x}(t))\) and both \(\ddot{x}(t)\) and \(\dot{x}(t)\) tend to zero, we have

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }\nabla _{x}\hat{f}(x(t),\varepsilon (t))=0 \end{aligned}$$
(60)

Therefore, \(x(t)\) converges to a stationary point \(x^*\) defined by (19), which completes the proof of Theorem 2.


About this article


Cite this article

Jiang, L., He, X. A Smoothing Inertial Neural Network for Sparse Signal Reconstruction with Noise Measurements via \(L_p\)-\(L_1\) minimization. Circuits Syst Signal Process 41, 6295–6313 (2022). https://doi.org/10.1007/s00034-022-02083-7

