Abstract
Iterative algorithms based on thresholding, feedback, and null space tuning (NST+HT+FB) for sparse signal recovery are exceedingly effective and efficient, particularly for large-scale problems. The core algorithm is shown to converge in finitely many steps under a (preconditioned) restricted isometry condition. In this article, we derive the number of iterations that guarantees the convergence of the NST+HT+FB algorithm. Moreover, an accelerated class of adaptive feedback schemes, termed NST+HT+f-FB, is proposed and analyzed. The NST+HT+f-FB scheme uses a variable/adaptive index selection and a different feedback principle at each iteration, defined by a function f(k). It is even more effective, both in its ability to recover sparse signals with a larger number of non-zeros and in its rate of convergence. The convergence of the accelerated scheme is established, and the finite number of iterations for guaranteed convergence by the NST+HT+f-FB scheme is also obtained. Furthermore, the rate of convergence and the condition for convergence can be improved by selecting an appropriate size of the thresholding index set Tk at each iteration. The theoretical findings are validated through extensive numerical experiments, which show that the proposed algorithm has a clearly advantageous balance of efficiency, adaptivity, and accuracy compared with other state-of-the-art greedy algorithms. Detailed comparisons are provided.
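As an illustration of the algorithmic structure described above, the following is a minimal sketch of an NST+HT+FB-style iteration: a null space tuning (NST) step that moves the iterate onto the solution space of Ax = y, hard thresholding (HT) to the s largest-magnitude entries, and a feedback (FB) step. The function name and the least-squares feedback on the selected support are illustrative assumptions for this sketch, not the paper's exact scheme.

```python
import numpy as np

def nst_ht_fb(A, y, s, max_iter=50, tol=1e-12):
    """Sketch of an NST+HT+FB-style iteration (least-squares feedback assumed)."""
    M, N = A.shape
    # Factor used by the NST step: A*(AA*)^{-1}, so that
    # u = x + A*(AA*)^{-1}(y - Ax) satisfies Au = y exactly.
    P = A.conj().T @ np.linalg.inv(A @ A.conj().T)
    x = np.zeros(N)
    for _ in range(max_iter):
        u = x + P @ (y - A @ x)            # NST: null space tuning
        T = np.argsort(np.abs(u))[-s:]     # HT: indices of the s largest entries
        x_new = np.zeros(N)
        # FB: feedback on the support T (here: a least-squares fit, an assumption)
        x_new[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x
```

In the noiseless Gaussian regime this iteration typically locks onto the true support in a handful of steps, consistent with the finite-step convergence analyzed in the article.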




References
Donoho, D. L.: Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006)
Candès, E. J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory. 51(12), 4203–4215 (2005)
Candès, E. J., Romberg, J., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math. 59(8), 1207–1223 (2006)
Nyquist, H.: Certain topics in telegraph transmission theory. Trans. A.I.E.E. 47(2), 617–644 (1928)
Duarte, M. F., Davenport, M. A., Takhar, D., Laska, J. N., Sun, T., Kelly, K. F., Baraniuk, R. G.: Single-Pixel Imaging via compressive sampling. IEEE Signal Process. Mag. 25(2), 83–91 (2008)
Yang, J., Wright, J., Huang, T. S., Ma, Y.: Image super-resolution via sparse representation. IEEE Trans. Image Process. 19(11), 2861–2873 (2010)
Malioutov, D., Cetin, M., Willsky, A. S.: A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process. 53(8), 3010–3022 (2005)
Donatelli, M., Huckle, T., Mazza, M., Sesana, D.: Image deblurring by sparsity constraint on the Fourier coefficients. Numer. Algorithms 72(2), 341–361 (2016)
Duarte, M. F., Baraniuk, R. G.: Spectral compressive sensing. Appl. Comput. Harmon. Anal. 35(1), 111–129 (2013)
Candès, E. J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM. 58(3), 1–37 (2011)
Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev. 52 (3), 471–501 (2010)
Gandy, S., Recht, B., Yamada, I.: Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Probl. 27(2), 19 (2011)
Jia, Z. G., Ng, M. K., Song, G. J.: Lanczos method for large-scale quaternion singular value decomposition. Numer. Algorithms 82(2), 699–717 (2019)
Natarajan, B. K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)
Chen, S. S., Donoho, D. L., Saunders, M. A.: Atomic decomposition by basis pursuit. SIAM Rev. 43(1), 129–159 (2001)
Candès, E. J.: The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, Ser. I 346, 589–592 (2008)
Foucart, S.: A note on guaranteed sparse recovery via ℓ1 minimization. Appl. Comput. Harmon. Anal. 29(1), 97–103 (2010)
Davies, M., Gribonval, R.: Restricted isometry constants where ℓp sparse recovery can fail for 0 < p ≤ 1. IEEE Trans. Inf. Theory. 55(5), 2203–2214 (2009)
Cai, T. T., Zhang, A.: Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmon. Anal. 35(1), 74–93 (2013)
Zhang, R., Li, S.: A Proof of Conjecture on Restricted Isometry Property Constants δtk \((0< t<\frac {4}{3})\). IEEE Trans. Inf. Theory. 64(3), 1699–1705 (2017)
Chartrand, R.: Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 14(10), 707–710 (2007)
Sun, Q. Y.: Recovery of sparsest signals via ℓq-minimization. Appl. Comput. Harmon. Anal. 32(3), 329–341 (2012)
Wu, R., Chen, D. R.: The Improved Bounds of Restricted Isometry Constant for Recovery via ℓp-Minimization. IEEE Trans. Inf. Theory. 59(9), 6142–6147 (2013)
Gao, Y., Peng, J. G., Yue, S. G.: Stability and robustness of the ℓ2/ℓq-minimization for block sparse recovery. Signal Process. 137, 287–297 (2017)
Zhang, R., Li, S.: Optimal RIP bounds for sparse signals recovery via ℓp minimization. Appl. Comput. Harmon. Anal. 47(3), 566–584 (2019)
Zheng, L., Maleki, A., Weng, H. L., Wang, X. D., Long, T.: Does ℓp-minimization outperform ℓ1-minimization? IEEE Trans. Inf. Theory. 63(11), 6896–6935 (2017)
Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008)
Candés, E. J., Wakin, M., Boyd, S.: Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2008)
Asif, M., Romberg, J.: Fast and accurate algorithms for re-weighted ℓ1 norm minimization. IEEE Trans. Signal Process. 61(23), 5905–5916 (2013)
Zhao, Y.B., Li, D.: Reweighted ℓ1 minimization for sparse solution to underdetermined linear systems. SIAM J. Optim. 22(3), 1065–1088 (2012)
Daubechies, I., DeVore, R., Fornasier, M., Güntürk, C. S.: Iteratively reweighted least squares minimization for sparse recovery. Commun. Pure Appl. Math. 63(1), 1–38 (2010)
Lai, C. K., Li, S. D., Mondo, D.: Spark-level sparsity and the ℓ1 tail minimization. Appl. Comput. Harmon. Anal. 45(1), 206–215 (2018)
Tropp, J. A.: Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory. 50(10), 2231–2242 (2004)
Tropp, J. A., Gilbert, A. C.: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory. 53(12), 4655–4666 (2007)
Needell, D., Vershynin, R.: Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 4(2), 310–316 (2010)
Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.: Sparse solutions of underdetermined linear equations by stagewise orthogonal matching pursuit, [Online]. Available: http://www-stat.stanford.edu/~donoho/Reports/2006/StOMP-20060403.pdf (2006)
Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory. 55(5), 2230–2249 (2009)
Needell, D., Tropp, J. A.: CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)
Wang, J., Kwon, S., Shim, B.: Generalized orthogonal matching pursuit. IEEE Trans. Signal Process. 60(12), 6202–6216 (2012)
Blumensath, T., Davies, M. E.: Iterative thresholding for sparse approximations. J. Fourier Anal. Appl. 14(5-6), 629–654 (2008)
Blumensath, T., Davies, M. E.: Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009)
Blumensath, T., Davies, M. E.: Normalized iterative hard thresholding: Guaranteed stability and performance. IEEE J. Sel. Top. Signal Process. 4(2), 298–309 (2010)
Blumensath, T.: Accelerated iterative hard thresholding. Signal Process. 92(3), 752–756 (2012)
Blanchard, J. D., Tanner, J., Wei, K.: CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. Inf. Inference 4(4), 289–327 (2015)
Blanchard, J. D., Tanner, J., Wei, K.: Conjugate gradient iterative hard thresholding: Observed noise stability for compressed sensing. IEEE Trans. Signal Process. 63(2), 528–537 (2015)
Foucart, S.: Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)
Bouchot, J.-L., Foucart, S., Hitczenko, P.: Hard thresholding pursuit algorithms: number of iterations. Appl. Comput. Harmon. Anal. 41(2), 412–435 (2016)
Bouchot, J. -L.: A generalized class of hard thresholding algorithms for sparse signal recovery. In: Approximation Theory XIV: San Antonio 2013, pp. 45–63. Springer (2014)
Li, S. D., Liu, Y. L., Mi, T. B.: Fast thresholding algorithms with feedbacks for sparse signal recovery. Appl. Comput. Harmon. Anal. 37(1), 69–88 (2014)
Funding
This work was supported by the Foundation for Distinguished Young Talents of Guangdong under grant 2021KQNCX075, the Guangdong Basic and Applied Basic Research Foundation under grant 2021A1515110530, the National Natural Science Foundation of China under grants 61972265, 11871348, and 61373087, the Natural Science Foundation of Guangdong Province of China under grant 2020B1515310008, the Educational Commission of Guangdong Province of China under grant 2019KZDZX1007, the Guangdong Key Laboratory of Intelligent Information Processing, China, and the NSF of USA under grants DMS-1313490 and DMS-1615288.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
1.1 Preliminary results
Definition 7.1
[2]. For each integer s = 1, 2,..., the matrix A is said to satisfy the RIP of order s with constant δs if
\((1-\delta _{s})\|x\|_{2}^{2}\leq \|Ax\|_{2}^{2}\leq (1+\delta _{s})\|x\|_{2}^{2}\)
holds for all s-sparse vectors x. Equivalently, it is given by
\(\delta _{s}=\max _{|T|\leq s}\left \|A_{T}^{\ast }A_{T}-I\right \|_{2\rightarrow 2}.\)
Definition 7.2
[50]. For each integer s = 1, 2,..., the matrix A is said to satisfy the P-RIP of order s with constant γs if
\((1-\gamma _{s})\|x\|_{2}^{2}\leq \left \|\left (AA^{\ast }\right )^{-\frac {1}{2}}Ax\right \|_{2}^{2}\leq (1+\gamma _{s})\|x\|_{2}^{2}\)
holds for all s-sparse vectors x. In fact, the preconditioned restricted isometry constant γs characterizes the restricted isometry property of the preconditioned matrix \((AA^{\ast })^{-\frac {1}{2}}A\). Since
\(\left \|\left (AA^{\ast }\right )^{-\frac {1}{2}}Ax\right \|_{2}^{2}=\langle A^{\ast }\left (AA^{\ast }\right )^{-1}Ax,x\rangle ,\)
γs is actually the smallest number such that, for all s-sparse vectors x,
\((1-\gamma _{s})\|x\|_{2}^{2}\leq \langle A^{\ast }\left (AA^{\ast }\right )^{-1}Ax,x\rangle \leq (1+\gamma _{s})\|x\|_{2}^{2}.\)
It indicates that \(\gamma _{s}(A)=\delta _{s}\left (\left (AA^{\ast }\right )^{-\frac {1}{2}}A\right )\). Equivalently, it is given by
\(\gamma _{s}=\max _{|T|\leq s}\left \|\left (\left (AA^{\ast }\right )^{-\frac {1}{2}}A\right )_{T}^{\ast }\left (\left (AA^{\ast }\right )^{-\frac {1}{2}}A\right )_{T}-I\right \|_{2\rightarrow 2}.\)
Lemma 7.1
Let δt be the RIP constant of A. For \(u,v\in \mathbb {C}^{N}\), if |supp(u) ∪ supp(v)|≤ t, then \(|\langle u,\left (I-A^{\ast }A\right )v\rangle |\leq \delta _{t}\|u\|_{2}\|v\|_{2}\). Suppose \(|T^{\prime } \cup supp (v)|\leq t\), then \(\|\left (\left (I-A^{\ast }A\right )v\right )_{T^{\prime }}\|_{2}\leq \delta _{t}\|v\|_{2}\).
Proof
Setting T = supp(v) ∪ supp(u), |T|≤ t, one has
\(|\langle u,\left (I-A^{\ast }A\right )v\rangle |=|\langle u_{T},\left (I-A_{T}^{\ast }A_{T}\right )v_{T}\rangle |\leq \|u\|_{2}\left \|\left (I-A_{T}^{\ast }A_{T}\right )v_{T}\right \|_{2}\leq \|u\|_{2}\left \|I-A_{T}^{\ast }A_{T}\right \|_{2\rightarrow 2}\|v\|_{2}\leq \delta _{t}\|u\|_{2}\|v\|_{2}.\)
The first inequality is due to the Cauchy-Schwarz inequality and the second inequality is due to the submultiplicativity of matrix norms, while the last step is based on Definition 7.1. Since
\(\left \|\left (\left (I-A^{\ast }A\right )v\right )_{T^{\prime }}\right \|_{2}=\sup _{\|u\|_{2}=1,\,supp(u)\subseteq T^{\prime }}|\langle u,\left (I-A^{\ast }A\right )v\rangle |,\)
one can obtain that \(\|\left (\left (I-A^{\ast }A\right )v\right )_{T^{\prime }}\|_{2}\leq \delta _{t}\|v\|_{2}\). □
Remark 7.1
Let γt be the P-RIP constant of A, i.e., \(\gamma _{t}(A)=\delta _{t}\left (\left (AA^{\ast }\right )^{-\frac {1}{2}}A\right )\). For \(u,v\in \mathbb {C}^{N}\), if |supp(u) ∪ supp(v)|≤ t, then \(|\langle u,\left (I-A^{\ast }\left (AA^{\ast }\right )^{-1}A\right )v\rangle |\leq \gamma _{t}\|u\|_{2}\|v\|_{2}\). Suppose \(|T^{\prime } \cup supp (v)|\leq t\), then \(\|\left (\left (I-A^{\ast }\left (AA^{\ast }\right )^{-1}A\right )v\right )_{T^{\prime }}\|_{2}\leq \gamma _{t}\|v\|_{2}\).
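To make the P-RIP constant concrete, the following sketch forms the preconditioned matrix \((AA^{\ast })^{-\frac {1}{2}}A\) and computes its RIP constant exactly by enumerating all column subsets, which is feasible only for tiny dimensions. The function names and sizes are illustrative.

```python
import numpy as np
from itertools import combinations

def precondition(A):
    """Form (AA^*)^{-1/2} A via an eigendecomposition of the positive definite AA^*."""
    w, V = np.linalg.eigh(A @ A.conj().T)
    return (V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T) @ A

def rip_constant(B, s):
    """Exact order-s RIP constant of B: max over |T| = s of ||B_T^* B_T - I||_{2->2}."""
    delta = 0.0
    for T in combinations(range(B.shape[1]), s):
        eig = np.linalg.eigvalsh(B[:, list(T)].conj().T @ B[:, list(T)])
        delta = max(delta, max(abs(eig[0] - 1.0), abs(eig[-1] - 1.0)))
    return delta
```

Note that \((AA^{\ast })^{-\frac {1}{2}}A\) has orthonormal rows, so its Gram matrix is an orthogonal projection and the resulting γs never exceeds 1.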
Lemma 7.2
For \(e\in \mathbb {C}^{M}\), \(\|\left (A^{\ast }e\right )_{T}\|_{2}\leq \sqrt {1+\delta _{t}}\|e\|_{2}\), when |T|≤ t.
Proof
Since |T|≤ t, Definition 7.1 gives \(\|A_{T}\|_{2\rightarrow 2}^{2}=\|A_{T}^{\ast }A_{T}\|_{2\rightarrow 2}\leq 1+\delta _{t}\), so that
\(\left \|\left (A^{\ast }e\right )_{T}\right \|_{2}=\|A_{T}^{\ast }e\|_{2}\leq \|A_{T}\|_{2\rightarrow 2}\|e\|_{2}\leq \sqrt {1+\delta _{t}}\|e\|_{2}.\)
Hence, for all \(e\in \mathbb {C}^{M}\), we have \(\|\left (A^{\ast }e\right )_{T}\|_{2}\leq \sqrt {1+\delta _{t}}\|e\|_{2}\). □
Remark 7.2
If \(\delta _{t}\left (\left (AA^{\ast }\right )^{-1}A\right )=\theta _{t}\), then for \(e\in \mathbb {C}^{M}\), \(\|\left (A^{\ast }\left (AA^{\ast }\right )^{-1}e\right )_{T}\|_{2}\leq \sqrt {1+\theta _{t}}\|e\|_{2}\), when |T|≤ t.
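The bound of Lemma 7.2 can be spot-checked numerically. The following sketch computes δt by brute-force enumeration for a tiny matrix and verifies the inequality on random draws of e and T; all sizes here are illustrative.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
M, N, t = 6, 10, 3
A = rng.standard_normal((M, N)) / np.sqrt(M)

# delta_t by brute force (tiny sizes only): max over |T| = t of ||A_T^* A_T - I||_{2->2}
delta_t = max(
    np.abs(np.linalg.eigvalsh(A[:, list(T)].T @ A[:, list(T)]) - 1.0).max()
    for T in combinations(range(N), t)
)

# Lemma 7.2: ||(A^* e)_T||_2 <= sqrt(1 + delta_t) ||e||_2 whenever |T| <= t
bound_ok = all(
    np.linalg.norm(A[:, rng.choice(N, t, replace=False)].T @ e)
    <= np.sqrt(1.0 + delta_t) * np.linalg.norm(e) + 1e-9
    for e in rng.standard_normal((100, M))
)
```

The inequality holds for every draw, as it must: it follows from \(\|A_{T}\|_{2\rightarrow 2}\leq \sqrt {1+\delta _{t}}\), not from randomness.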
1.2 Proof of Lemma 3.1.
Proof
We first have that
\(\|[\mu ^{\prime }+A^{\ast }(AA^{\ast })^{-1}(y-A\mu ^{\prime })]_{T}\|_{2}\geq \|[\mu ^{\prime }+A^{\ast }(AA^{\ast })^{-1}(y-A\mu ^{\prime })]_{S}\|_{2}.\)
Eliminating the common terms over \(T\bigcap S\), one has
\(\|[\mu ^{\prime }+A^{\ast }(AA^{\ast })^{-1}(y-A\mu ^{\prime })]_{T\setminus S}\|_{2}\geq \|[\mu ^{\prime }+A^{\ast }(AA^{\ast })^{-1}(y-A\mu ^{\prime })]_{S\setminus T}\|_{2}.\)
For the left-hand side,
The right-hand side satisfies
Consequently,
The last step is due to Remark 7.1 and Remark 7.2. □
1.3 Proof of Lemma 3.2.
Proof
For any \(z\in \mathbb {C}^{N}\) supported on T,
The last step is due to the feasibility of \(x^{\prime }\), i.e., \(y=Ax^{\prime }\). The inner product can also be written as
\(\langle A\mu ^{\prime }-y,Az\rangle =\langle (A\mu ^{\prime }-Ax-e),Az\rangle =0\).
Therefore,
\(\langle (\mu ^{\prime }-x),A^{\ast }Az\rangle =\langle e,Az\rangle ,~~\forall z\in \mathbb {C}^{N}\) supported on T.
Since \((\mu ^{\prime }-x)_{T}\) is supported on T, one has
\(\langle (\mu ^{\prime }-x),A^{\ast }A(\mu ^{\prime }-x)_{T}\rangle =\langle e,A(\mu ^{\prime }-x)_{T}\rangle .\)
Consequently,
The last inequality follows from Lemma 7.2, the Cauchy-Schwarz inequality, and Definition 7.1. Therefore, we have
\(\|(x-\mu ^{\prime })_{T}\|_{2}\leq \delta _{s+t}\|x-\mu ^{\prime }\|_{2}+\sqrt {1+\delta _{t}}\|e\|_{2}\).
It then follows that
In other words,
\(\left (\sqrt {1-\delta _{s+t}^{2}}\|(x-\mu ^{\prime })\|_{2}-\frac {\delta _{s+t}\sqrt {1+\delta _{t}}}{\sqrt {1-\delta _{s+t}^{2}}}\|e\|_{2}\right )^{2} \leq \frac {1+\delta _{t}}{1-\delta _{s+t}^{2}}\|e\|_{2}^{2}+\|x_{T^{c}}\|_{2}^{2}.\)
It means that
□
1.4 Proof of Lemma 4.1.
Proof
As defined, \(\widehat {x}\in \mathbb {C}^{N}_{+}\) is the nonincreasing rearrangement of a vector \(x=(x_{1}, x_{2},\cdots ,x_{N})\in \mathbb {C}^{N}\), i.e., \(\widehat {x}_{1}\geq \widehat {x}_{2}\geq \cdots \geq \widehat {x}_{N}\geq 0\), and there exists a permutation π of {1,…,N} such that \(\widehat {x}_{i}=|x_{\pi (i)}|\) for all i ∈{1,…,N}. For NST+HT+FB, the hypothesis is \(\pi (\{1,\ldots ,p\})\subseteq T_{k}\) and the goal is to prove that \(\pi (\{1,\ldots ,p+q\})\subseteq T_{k+\ell }\). That is to say, the entries \(|\left (\mu ^{\ell +k-1}+A^{\ast }\left (AA^{\ast }\right )^{-1}\left (y-A\mu ^{\ell +k-1}\right )\right )_{\pi (j)}|\) for j ∈{1,…,p + q} are among the s largest entries of \(|\left (\mu ^{\ell +k-1}+A^{\ast }\left (AA^{\ast }\right )^{-1}\left (y-A\mu ^{\ell +k-1}\right )\right )_{i}|\) for i ∈{1,…,N}. Since supp(x) = S and |S| = s, it is then enough to prove that
For j ∈{1,…,p + q} and i ∈ Sc, in view of
For the proof of (1), it suffices to prove that, for all j ∈{1,…,p + q} and i ∈ Sc,
The right-hand side can be bounded by
where Remark 3.1 was used ℓ − 1 times in the last step. Furthermore, considering Lemma 3.2 and the assumption that \(\pi (\{1,\ldots ,p\})\subseteq T_{k}\), we have
Therefore, if the condition (4.1) holds, then (2) is proved. □
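The nonincreasing rearrangement \(\widehat {x}\) and the permutation π used in the proof above can be computed directly; a minimal sketch:

```python
import numpy as np

x = np.array([0.5, -3.0, 1.2, 0.0, -0.7])
pi = np.argsort(-np.abs(x), kind="stable")  # permutation ordering |x| nonincreasingly
x_hat = np.abs(x)[pi]                       # nonincreasing rearrangement: [3.0, 1.2, 0.7, 0.5, 0.0]
```

Here \(\widehat {x}_{i}=|x_{\pi (i)}|\) by construction, matching the definition at the start of the proof.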
Cite this article
Han, N., Lu, J. & Li, S. The finite steps of convergence of the fast thresholding algorithms with f-feedbacks in compressed sensing. Numer Algor 90, 1197–1223 (2022). https://doi.org/10.1007/s11075-021-01227-1