Abstract
Sparse signal learning is essentially a sparse-solution optimization problem, and it is especially relevant to signal recovery tasks such as image reconstruction. Such a problem can be solved by gradient or subgradient descent. However, conventional methods typically introduce an extra quadratic term to construct a more complex objective function, whose solution requires many iteration steps. To address this problem, this paper proposes a novel method, called restricted subgradient descent, for learning sparse signals. Our idea is based on the fact that a subgradient of the 1-norm exists at every n-dimensional point, and that the 1-norm is even differentiable at points with no zero coordinate components. Thus, to decrease the 1-norm objective, the gradient or subgradient direction can be used to search for the next estimate, which allows the proposed method to learn a high-quality sparse solution with fast convergence. Specifically, two algorithms are proposed: the first uses only a restricted subspace projection scheme, while the refined one is based on an improved version of the pivot step of the simplex algorithm. We show that the refined algorithm recovers the source sparse signal exactly in a finite number of iterations when the subgradient condition is satisfied. This theoretical result is also verified by numerical simulations, which show good performance compared with other state-of-the-art sparse signal learning algorithms.
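As a rough illustration of the general idea described above, the following Python sketch applies a subgradient of the 1-norm, projected onto the null space of the measurement matrix A, to the basis-pursuit problem min ||x||_1 subject to Ax = b. This is an assumption-laden sketch, not the authors' restricted subgradient descent algorithm: the function name projected_subgradient_l1, the pseudoinverse-based projection, the diminishing step rule, and the example dimensions are all illustrative choices.

import numpy as np

def projected_subgradient_l1(A, b, n_iter=2000, step0=1.0):
    # Minimal sketch of a projected-subgradient method for
    #   min ||x||_1  subject to  Ax = b.
    # It only illustrates the general idea mentioned in the abstract
    # (following a 1-norm subgradient restricted to the feasible subspace);
    # it is NOT the paper's restricted subgradient descent algorithm.
    A_pinv = np.linalg.pinv(A)            # pseudoinverse, used for the affine projection
    x = A_pinv @ b                        # feasible least-norm starting point, Ax = b
    best_x, best_val = x.copy(), np.abs(x).sum()
    for k in range(1, n_iter + 1):
        g = np.sign(x)                    # a subgradient of ||x||_1 (0 chosen at zero entries)
        g = g - A_pinv @ (A @ g)          # project onto the null space of A, so Ax = b is preserved
        x = x - (step0 / np.sqrt(k)) * g  # diminishing step size
        val = np.abs(x).sum()
        if val < best_val:                # keep the best iterate (subgradient steps are not monotone)
            best_x, best_val = x.copy(), val
    return best_x

# Usage example: recover a 10-sparse signal of length 200 from 80 random measurements.
rng = np.random.default_rng(0)
n, m, s = 200, 80, 10
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = projected_subgradient_l1(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))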
Acknowledgements
This work was supported in part by the Natural Science Foundation of China under Grant 61703283, in part by the Laboratory for Artificial Intelligence in Design (Project Code: RP3-3), in part by the Innovation and Technology Fund, Hong Kong SAR, in part by the Guangdong Basic and Applied Basic Research Foundation 2021A1515011318, 2017A030310067, in part by the Shenzhen Municipal Science and Technology Innovation Council under the Grant JCYJ20190808113411274, in part by the Shenzhen Visual Object Detection and Recognition Key Laboratory Open Project HITSZ20220287, in part by the Overseas High-Caliber Professional in Shenzhen under Project 20190629729C, in part by the High-Level Professional in Shenzhen under Project 20190716892H, in part by the Research Foundation for Postdoctor Worked in Shenzhen under Project 707-0001300148 and 707-0001310414, in part by the National Engineering Laboratory for Big Data System Computing Technology, in part by the Guangdong Laboratory of Artificial-Intelligence and Cyber-Economics (SZ), in part by the Shenzhen Institute of Artificial Intelligence and Robotics for Society, in part by the Scientific Research Foundation of Shenzhen University under Project 2019049, Project 860-000002110328 and Project 827-000526.
Cite this article
Wen, J., Wong, W.K., Hu, XL. et al. Restricted subgradient descend method for sparse signal learning. Int. J. Mach. Learn. & Cyber. 13, 2691–2709 (2022). https://doi.org/10.1007/s13042-022-01551-5