Abstract
This paper addresses the problem of sparse signal reconstruction by developing centralized and distributed neurodynamic approaches. Sparse signals can be reconstructed by solving an \( l_{1} \)-minimization problem, for which a centralized neurodynamic approach with derivative feedback is first designed. Since distributed approaches decompose a large-scale problem into small-scale subproblems without the central processing node required by centralized ones, they effectively reduce the computational burden on each individual node. Based on distributed consensus and graph theory, the original \( l_{1} \)-minimization problem is equivalently transformed into a distributed optimization problem. Building on the proposed centralized approach, a distributed neurodynamic approach with derivative feedback is then developed. Using convex optimization theory and the Lyapunov method, we show that the optimal solution of the \( l_{1} \)-minimization problem coincides with the equilibrium point of the centralized or distributed approach, and that each neurodynamic approach converges globally to its equilibrium point. Furthermore, simulation results on sparse signal and image reconstruction demonstrate the effectiveness and superiority of the proposed approaches in comparison with several state-of-the-art neurodynamic approaches.
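The abstract centers on the \( l_{1} \)-minimization formulation of sparse reconstruction. As a point of reference only, the sketch below recovers a synthetic sparse signal with a generic Euler-discretized proximal-gradient flow applied to the LASSO relaxation; it is not the paper's derivative-feedback neurodynamic approach, and the problem sizes, regularization weight, and step sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (NOT the paper's derivative-feedback dynamics): a generic
# continuous-time proximal-gradient flow for the LASSO relaxation
#     min_x 0.5*||A x - b||_2^2 + lam*||x||_1,
# integrated with forward Euler. lam, dt, and the problem sizes below are
# illustrative assumptions.

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (component-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_flow(A, b, lam=0.01, dt=0.5, steps=2000):
    """Euler discretization of dx/dt = prox_{t*lam}(x - t*A^T(Ax - b)) - x."""
    _, n = A.shape
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # gradient step bounded by 1/||A||_2^2
    x = np.zeros(n)
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = x + dt * (soft_threshold(x - t * grad, t * lam) - x)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 8                          # signal length, measurements, sparsity
    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x_true[support] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
    b = A @ x_true
    x_hat = proximal_gradient_flow(A, b)
    print("relative reconstruction error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With step size dt = 1 the iteration above reduces to the standard ISTA update; the continuous-time view is given only to mirror the neurodynamic (ODE-based) perspective taken in the paper.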










Data Availability
All data generated or analyzed during this study are included in this published article.
Acknowledgements
This work is supported in part by the National Key R&D Program of China (No. 2018AAA0100101), in part by the National Natural Science Foundation of China (Grant No. 61932006), and in part by the Chongqing Technology Innovation and Application Development Project (Grant No. cstc2020jscx-msxmX0156).
Ethics declarations
Conflict of interest
The authors declare that they have no conflicts of interest.
About this article
Cite this article
Zhou, X., Zhao, Y., Zheng, H. et al. Neurodynamic approaches with derivative feedback for sparse signal reconstruction. Neural Comput & Applic 35, 9501–9515 (2023). https://doi.org/10.1007/s00521-022-08166-5