Abstract
Solving \(l_1\)-minimization problems efficiently and reliably is important in compressed sensing (CS), since \(l_1\) minimization is essential for the recovery of sparse signals. In view of this, a neurodynamic optimization approach is proposed for solving \(l_1\)-minimization problems for the reconstruction of sparse signals, based on a projection neural network (PNN). The proposed approach differs from most \(l_1\) solvers in that it operates in continuous time rather than through discrete iterations; i.e., it evolves according to deterministic neurodynamics. The proposed PNN is designed using subgradient projection methods. The neural network has a simple structure, giving it the potential to be implemented as a large-scale analog circuit. It is proved that, under appropriate conditions on the measurement matrix, every neuronal state of the proposed neural network converges to the optimal solution of the \(l_1\)-minimization problem under study. Simulation results are provided to substantiate the effectiveness of the proposed approach.
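The abstract does not give the network's exact dynamics, but a minimal numerical sketch of the general idea — a continuous-time subgradient-projection flow for \(\min \|x\|_1\) s.t. \(Ax = b\), discretized by forward Euler — can illustrate how such neurodynamics recover a sparse signal. The specific flow \(\dot{x} = -P\,\mathrm{sign}(x) - A^{+}(Ax - b)\), with \(P = I - A^{+}A\) the projector onto the null space of \(A\), is an assumption for illustration, not the paper's PNN model:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20, 40, 3  # measurements, signal dimension, sparsity

# Ground-truth k-sparse signal and measurements b = A @ x_true
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
b = A @ x_true

# A^+ = A^T (A A^T)^{-1}; P projects onto the null space of A
Aplus = A.T @ np.linalg.inv(A @ A.T)
P = np.eye(n) - Aplus @ A

# Forward-Euler discretization of the subgradient-projection flow
#   dx/dt = -P sign(x) - A^+ (A x - b)
# The first term descends the l1 norm within the feasible subspace;
# the second term drives the state toward feasibility (A x = b).
h, steps = 0.002, 50000
x = np.zeros(n)
for _ in range(steps):
    x -= h * (P @ np.sign(x) + Aplus @ (A @ x - b))
```

Because \(AP = 0\), the residual \(r = Ax - b\) obeys \(\dot{r} = -r\) under this flow, so feasibility is reached exponentially fast regardless of the subgradient term; the fixed-step discretization then hovers within \(O(h)\) of the \(l_1\) minimizer.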
The work described in the paper was supported by the National Natural Science Foundation of China under Grant 61473325.
Cite this article
Li, G., Yan, Z. Reconstruction of sparse signals via neurodynamic optimization. Int. J. Mach. Learn. & Cyber. 10, 15–26 (2019). https://doi.org/10.1007/s13042-017-0694-4