
Reconstruction of sparse signals via neurodynamic optimization

  • Original Article, International Journal of Machine Learning and Cybernetics

Abstract

Solving \(l_1\)-minimization problems efficiently and reliably is central to compressed sensing (CS), since \(l_1\) minimization underlies the recovery of sparse signals. Accordingly, a neurodynamic optimization approach based on a projection neural network (PNN) is proposed for solving the \(l_1\)-minimization problems arising in sparse signal reconstruction. The proposed approach differs from most \(l_1\) solvers in that it operates in continuous time rather than through discrete iterations; that is, it evolves according to deterministic neurodynamics. The PNN is designed using subgradient projection methods. The network has a simple structure, giving it the potential to be implemented as a large-scale analog circuit. It is proved that, under appropriate conditions on the measurement matrix, every neuronal state of the proposed network converges to the optimal solution of the \(l_1\)-minimization problem under study. Simulation results substantiate the effectiveness of the proposed approach.
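The flavor of the subgradient-projection dynamics described in the abstract can be illustrated with a minimal Euler-discretized sketch for \(\min \|x\|_1\) subject to \(Ax = b\): each step takes a subgradient of the \(l_1\) norm (the sign vector) and projects back onto the affine feasible set. This is a generic illustration under assumed problem sizes, step schedule, and a Gaussian measurement matrix, not the authors' exact PNN, which evolves in continuous time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CS setup: n-dimensional signal with k nonzeros, m < n measurements.
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# Euclidean projection onto the affine feasible set {x : Ax = b}.
AAt_inv = np.linalg.inv(A @ A.T)
def proj(z):
    return z - A.T @ (AAt_inv @ (A @ z - b))

# Euler discretization of subgradient-projection dynamics:
# move against a subgradient of ||x||_1 (its sign vector), then re-project.
x = proj(np.zeros(n))              # least-norm feasible starting point
for t in range(20000):
    step = 1.0 / (t + 100)         # diminishing step size
    x = proj(x - step * np.sign(x))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In the continuous-time setting of the paper, the analogous object is a differential equation driven by the projected subgradient field; the discrete loop above stands in for numerical simulation of that flow, while an analog-circuit realization would integrate it directly in hardware.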



Author information

Correspondence to Zheng Yan.

Additional information

The work described in the paper was supported by the National Natural Science Foundation of China under Grant 61473325.


Cite this article

Li, G., Yan, Z. Reconstruction of sparse signals via neurodynamic optimization. Int. J. Mach. Learn. & Cyber. 10, 15–26 (2019). https://doi.org/10.1007/s13042-017-0694-4
