Abstract
Recovering a non-negative sparse signal from an underdetermined linear system remains a challenging problem in signal processing. Despite the development of various approaches, such as non-negative least squares as well as variants of greedy algorithms and iterative thresholding methods, their recovery performance and efficiency often fall short of practical expectations. To address this limitation, this paper first devises a momentum-boosted adaptive thresholding algorithm for non-negative sparse signal recovery. We then establish two sufficient conditions for stable recovery with the proposed algorithm, based on the restricted isometry property and mutual coherence, respectively. Extensive tests on synthetic and real-world data demonstrate the superiority of our approach over state-of-the-art non-negative orthogonal greedy algorithms and iterative thresholding methods in terms of the probability of successful recovery, phase transition behavior, and computational efficiency.









Data Availability
All data included in this study are available upon request by contacting the corresponding author.
References
Berger, C.R., Zhou, S., Preisig, J.C., Willett, P.: Sparse channel estimation for multicarrier underwater acoustic communication: from subspace methods to compressed sensing. IEEE Trans. Signal Process. 58(3), 1708–1721 (2010)
Bioucas-Dias, J.M., Figueiredo, M.A.T.: A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration. IEEE Trans. Image Process. 16(12), 2992–3004 (2007)
Blanchard, J.D., Tanner, J.: Performance comparisons of greedy algorithms in compressed sensing. Numer. Linear Algebra Appl. 22(2), 254–282 (2015)
Blanchard, J.D., Tanner, J., Wei, K.: Conjugate gradient iterative hard thresholding: observed noise stability for compressed sensing. IEEE Trans. Signal Process. 63(2), 528–537 (2015)
Blumensath, T., Davies, M.E.: Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009)
Blumensath, T., Davies, M.E.: Normalized iterative hard thresholding: guaranteed stability and performance. IEEE J. Sel. Top. Signal Process. 4(2), 298–309 (2010)
Boufounos, P.T., Baraniuk, R.G.: 1-bit compressive sensing. In: Proc. 42nd Annu. Conf. Inf. Sci. Syst., pp. 16–21 (Mar. 2008)
Bruckstein, A.M., Elad, M., Zibulevsky, M.: On the uniqueness of nonnegative sparse solutions to underdetermined systems of equations. IEEE Trans. Inf. Theory 54(11), 4813–4820 (2008)
Cai, X., Chan, R., Nikolova, M., Zeng, T.: A three-stage approach for segmenting degraded color images: smoothing, lifting and thresholding (SLaT). J. Sci. Comput. 72, 1313–1332 (2017)
Candès, E., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)
Chartrand, R.: Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 14(10), 707–710 (2007)
Chen, X., Liu, J., Wang, Z., Yin, W.: Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds. In: Advances in Neural Inf. Process. Syst., vol. 31 (2018)
Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009)
Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)
Donoho, D.L., Huo, X.: Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inf. Theory 47(7), 2845–2862 (2001)
Donoho, D.L., Maleki, A., Montanari, A.: Message-passing algorithms for compressed sensing. Proc. Nat. Acad. Sci. 106(45), 18914–18919 (2009)
Foucart, S.: Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)
Foucart, S., Koslicki, D.: Sparse recovery by means of nonnegative least squares. IEEE Signal Process. Lett. 21(4), 498–502 (2014)
Foucart, S., Rauhut, H.: A Mathematical Introduction to Compressive Sensing. Birkhäuser, Basel (2013)
Ge, H., Chen, W., Ng, M.K.: New restricted isometry property analysis for \(\ell _1-\ell _2\) minimization methods. SIAM J. Imaging Sci. 14(2), 530–557 (2021)
Geng, T., Sun, G., Xu, Y., He, J.: Truncated nuclear norm minimization based group sparse representation for image restoration. SIAM J. Imaging Sci. 11(3), 1878–1897 (2018)
Georghiades, A., Belhumeur, P., Kriegman, D.: From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell. 23(6), 643–660 (2001)
Gross, R., Matthews, I., Cohn, J., Kanade, T., Baker, S.: Multi-PIE. Image Vis. Comput. 28(5), 807–813 (2010)
Han, H., Wang, G., Wang, M., Miao, J., Guo, S., Chen, L., Zhang, M., Guo, K.: Hyperspectral unmixing via nonconvex sparse and low-rank constraint. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 13, 5704–5718 (2020)
He, R., Zheng, W.S., Hu, B.G., Kong, X.W.: Two-stage nonnegative sparse representation for large-scale face recognition. IEEE Trans. Neural Netw. Learn. Syst. 24(1), 35–46 (2013)
He, R., Zheng, W.S., Hu, B.G., Kong, X.W.: Nonnegative sparse coding for discriminative semi-supervised learning. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2849–2856 (2011)
He, Z., Shu, Q., Wang, Y., Wen, J.: A ReLU-based hard-thresholding algorithm for non-negative sparse signal recovery. Signal Process. 215, 109260 (2024)
Herman, M.A., Strohmer, T.: High-resolution radar via compressed sensing. IEEE Trans. Signal Process. 57(6), 2275–2284 (2009)
Huo, L., Chen, W., Ge, H., Ng, M.K.: \(L_1-\beta L_q\) minimization for signal and image recovery. SIAM J. Imaging Sci. 16(4), 1886–1928 (2023)
Iordache, M.D., Bioucas-Dias, J.M., Plaza, A.: Sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 49(6), 2014–2039 (2011)
Ji, Y., Lin, T., Zha, H.: Mahalanobis distance based non-negative sparse representation for face recognition. In: Proc. Int. Conf. Mach. Learn. Appl., pp. 41–46 (2009)
Lawson, C.L., Hanson, R.J.: Solving Least Squares Problems. SIAM, Philadelphia (1995)
Li, S., Xu, L.D., Wang, X.: Compressed sensing signal and data acquisition in wireless sensor networks and internet of things. IEEE Trans. Industr. Inform. 9(4), 2177–2186 (2013)
Liu, J., Zhang, J.: Spectral unmixing via compressive sensing. IEEE Trans. Geosci. Remote Sens. 52(11), 7099–7110 (2014)
Liu, Y., Wu, F., Zhang, Z., Zhuang, Y., Yan, S.: Sparse representation using nonnegative curds and whey. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 3578–3585 (2010)
Lyons, M., Akamatsu, S., Kamachi, M., Gyoba, J.: Coding facial expressions with Gabor wavelets. In: Proc. 3rd IEEE Int. Conf. Autom. Face Gesture Recognit., pp. 200–205 (1998)
Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: Proc. Int. Conf. Mach. Learn., pp. 807–814 (2010)
Nakarmi, U., Rahnavard, N.: BCS: Compressive sensing for binary sparse signals. In: Proc. IEEE Military Commun. Conf., pp. 1–5 (2012)
Needell, D., Tropp, J.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)
Nguyen, T.T., Idier, J., Soussen, C., Djermoune, E.H.: Non-negative orthogonal greedy algorithms. IEEE Trans. Signal Process. 67(21), 5643–5658 (2019)
Nguyen, T.T., Soussen, C., Idier, J., Djermoune, E.H.: K-step analysis of orthogonal greedy algorithms for non-negative sparse representations. Signal Process. 188, 108185 (2021)
Pan, L., Chen, X.: Group sparse optimization for images recovery using capped folded concave functions. SIAM J. Imaging Sci. 14(1), 1–25 (2021)
Pan, L., Zhou, S., Xiu, N., Qi, H.D.: A convergent iterative hard thresholding for nonnegative sparsity optimization. Pac. J. Optim. 13(2), 325–353 (2017)
Parvaresh, F., Vikalo, H., Misra, S., Hassibi, B.: Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2(3), 275–285 (2008)
Polyak, B.T.: Introduction to Optimization. Optimization Software Inc, New York (1987)
Slawski, M., Hein, M.: Sparse recovery by thresholded non-negative least squares. In: Adv. Neural Inf. Process. Syst., vol. 24 (2011)
Sun, Z.F., Zhou, J.C., Zhao, Y.B., Meng, N.: Heavy-ball-based hard thresholding algorithms for sparse signal recovery. J. Comput. Appl. Math. 430, 115264 (2023)
The Olivetti & Oracle Research Laboratory: the ORL database of faces. https://cam-orl.co.uk/facedatabase.html (1994)
Tropp, J.A., Gilbert, A.C.: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007)
Vardi, Y.: Network tomography: estimating source-destination traffic intensities from link data. J. Am. Stat. Assoc. 91(433), 365–377 (1996)
Vo, N.Q., Moran, W., Challa, S.: Nonnegative-least-square classifier for face recognition. In: Proc. Int. Symp. Neural Netw., Adv. Neural Netw., pp. 449–456 (2009)
Wang, Y., He, Z., Zhang, G., Wen, J.: Improved sufficient conditions based on RIC of order 2s for IHT and HTP algorithms. IEEE Signal Process. Lett. 30, 668–672 (2023)
Wang, Y., Zeng, J., Peng, Z., Chang, X., Xu, Z.: Linear convergence of adaptively iterative thresholding algorithms for compressed sensing. IEEE Trans. Signal Process. 63(11), 2957–2971 (2015)
Wang, Z., Bovik, A., Sheikh, H., Simoncelli, E.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
Wen, J., Li, H.: Binary sparse signal recovery with binary matching pursuit. Inverse Probl. 37(6), 065014 (2021)
Wen, J., Zhang, R., Yu, W.: Signal-dependent performance analysis of orthogonal matching pursuit for exact sparse recovery. IEEE Trans. Signal Process. 68, 5031–5046 (2020)
Wen, J., Zhou, Z., Wang, J., Tang, X., Mo, Q.: A sharp condition for exact support recovery with orthogonal matching pursuit. IEEE Trans. Signal Process. 65(6), 1370–1382 (2016)
Wright, J., Ma, Y., Mairal, J., Sapiro, G., Huang, T.S., Yan, S.: Sparse representation for computer vision and pattern recognition. Proc. IEEE 98(6), 1031–1044 (2010)
Wu, T., Shao, J., Gu, X., Ng, M.K., Zeng, T.: Two-stage image segmentation based on nonconvex \(\ell _{2}-\ell _{p}\) approximation and thresholding. Appl. Math. Comput. 403, 126168 (2021)
Xu, J., An, W., Zhang, L., Zhang, D.: Sparse, collaborative, or nonnegative representation: Which helps pattern classification? Pattern Recognit. 88, 679–688 (2019)
Yaghoobi, M., Wu, D., Davies, M.E.: Fast non-negative orthogonal matching pursuit. IEEE Signal Process. Lett. 22(9), 1229–1233 (2015)
Yang, A.Y., Maji, S., Hong, K., Yan, P., Sastry, S.S.: Distributed compression and fusion of nonnegative sparse signals for multiple-view object recognition. In: Proc. Int. Conf. Inf. Fusion, pp. 1867–1874 (2009)
Zhang, S., Wang, J., Shi, W., Gong, Y., Xia, Y., Zhang, Y.: Normalized non-negative sparse encoder for fast image representation. IEEE Trans. Circuits Syst. Video Technol. 29(7), 1962–1972 (2019)
Zhao, Y.B.: Optimal \(k\)-thresholding algorithms for sparse optimization problems. SIAM J. Optim. 30(1), 31–55 (2020)
Zhao, Y.B., Luo, Z.Q.: Improved RIP-based bounds for guaranteed performance of two compressed sensing algorithms. Sci. China Math. 66(5), 1123–1140 (2023)
Zhuang, L., Gao, H., Lin, Z., Ma, Y., Zhang, X., Yu, N.: Non-negative low rank and sparse graph for semi-supervised learning. In: Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2328–2335 (2012)
Zymnis, A., Boyd, S., Candès, E.: Compressed sensing with quantized measurements. IEEE Signal Process. Lett. 17(2), 149–152 (2010)
Funding
This work was partially supported by NSFC (12271215, 12326378, 11871248, and 12326377).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Proof of Lemma 4
This lemma can be proved using a methodology similar to that employed in [53, Lemma 3]; we provide its proof here for the readers' convenience. Since \(\left\Vert {u}\right\Vert _{p} \le \left\Vert {u}\right\Vert _{q}\) holds for any vector \({u}\) and any \(1\le q \le p\le \infty \), to prove (17) we only need to show that
where \(\varOmega \) is the set of indices of the \(K+1\) largest-magnitude entries in \({u}\). Since \(|{\text {supp}}\{{\varvec{x}}\}|\le K<|\varOmega |=K+1\), there exists an index \(j\) such that \(j\in \varOmega \) and \(j\notin {\text {supp}}\{{\varvec{x}}\}\); then we have
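The norm monotonicity \(\left\Vert {u}\right\Vert _{p} \le \left\Vert {u}\right\Vert _{q}\) for \(1\le q \le p\le \infty \) invoked at the start of the proof can be spot-checked numerically. The following Python snippet is a sanity check on random vectors (not part of the paper's argument):

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_norm(u, p):
    """l_p norm of a vector; p = np.inf gives the max norm."""
    if np.isinf(p):
        return np.abs(u).max()
    return (np.abs(u) ** p).sum() ** (1.0 / p)

# Empirically verify ||u||_p <= ||u||_q for 1 <= q <= p <= inf
# on random vectors (a sanity check, not a proof).
for _ in range(100):
    u = rng.standard_normal(20)
    for q, p in [(1, 2), (2, 3), (1, np.inf), (2, np.inf)]:
        assert lp_norm(u, p) <= lp_norm(u, q) + 1e-12
print("norm monotonicity verified on random samples")
```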
Proof of Lemma 5
Proof
According to the Cauchy–Schwarz inequality and \(\left\Vert \varvec{A}_{i}\right\Vert _{2}=1\), one can easily show that
Denote \(\varvec{Q}=\varvec{I}-\rho \varvec{A}^{T}\varvec{A}\) and \(\mathcal {S} ={\text {supp}}\{{\varvec{x}}\}\). Combining (13) with \(\left\Vert \varvec{A}_{i}\right\Vert _{2}=1\), we have \(\left|\varvec{A}_{i}^T\varvec{A}_{j}\right|\le \mu \) for \(i\ne j\), leading to \(\left|Q_{ij}\right|\le \max \left\{ |1-\rho |,\mu \rho \right\} \) and
\(\square \)
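As a numerical illustration of Lemma 5, the entrywise bound \(\left|Q_{ij}\right|\le \max \left\{ |1-\rho |,\mu \rho \right\} \) can be checked directly. The sketch below uses assumed test quantities (a random \(30\times 60\) Gaussian matrix with \(\ell _2\)-normalized columns and step size \(\rho =0.9\)), not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random matrix with unit-norm columns (||A_i||_2 = 1).
A = rng.standard_normal((30, 60))
A /= np.linalg.norm(A, axis=0)

G = A.T @ A
mu = np.abs(G - np.eye(60)).max()   # mutual coherence, as in (13)

rho = 0.9                           # assumed step size rho > 0
Q = np.eye(60) - rho * G

# Entrywise bound of Lemma 5: |Q_ii| = |1 - rho| and |Q_ij| <= mu * rho
# for i != j, hence |Q_ij| <= max{|1 - rho|, mu * rho} for all i, j.
bound = max(abs(1 - rho), mu * rho)
assert np.abs(Q).max() <= bound + 1e-12
print(f"mu = {mu:.3f}, entrywise bound = {bound:.3f}")
```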
Proof of Lemma 6
Proof
Based on (3), (22), and (23), we can rewrite \(\varvec{z}^{[n]}\) in (12) as
Furthermore, by (9) and line 2 of Algorithm 1, we have \(\varvec{z}^{[n]}={u}^{[n]}+\bar{{u}}^{[n]}\), where
Thus, for any \(i\in \{1,\ldots ,N\}\):
-
a)
If \(z_i^{[n]}>0\), it holds that
$$\begin{aligned} u_i^{[n]}=z_i^{[n]}=x_i+w_i+q_i, \end{aligned}$$which implies \(u_i^{[n]}-x_i=w_i+q_i\) and hence \(\left|u_i^{[n]}-x_i\right|=\left|w_i+q_i\right|\).
-
b)
If \(z_i^{[n]}\le 0\), we have \(u_i^{[n]}=0\) and \(x_i\le -(w_i+q_i)\). Since \(x_i\ge 0\), we have \(\left|u_i^{[n]}-x_i\right|=|x_i|\le |w_i+q_i|\).
This implies \(\left|u^{[n]}_{i}-x_{i}\right|\le |w_{i}+q_{i}|\) for any \(i\in \{1,\ldots ,N\}\), which, combined with (17), yields
where (a) follows from (12) and (40), and the last inequality is due to (18), (19), and (24). Hence (21) holds. \(\square \)
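The ReLU split \(\varvec{z}^{[n]}={u}^{[n]}+\bar{{u}}^{[n]}\) and the componentwise error bound of Lemma 6 can be illustrated numerically. The snippet below uses a synthetic non-negative signal and a Gaussian perturbation standing in for \(\varvec{w}+\varvec{q}\) (illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# z = x + w + q with a non-negative signal x, as in (40).
x = np.maximum(rng.standard_normal(1000), 0.0)   # x_i >= 0
e = 0.5 * rng.standard_normal(1000)              # plays the role of w + q
z = x + e

# ReLU split z = u + u_bar; u is the ReLU part kept by the algorithm.
u = np.maximum(z, 0.0)       # u_i = z_i if z_i > 0, else 0
u_bar = np.minimum(z, 0.0)   # the discarded non-positive part

assert np.allclose(u + u_bar, z)
# Case analysis of Lemma 6: |u_i - x_i| <= |w_i + q_i| for every i.
assert np.all(np.abs(u - x) <= np.abs(e) + 1e-12)
print("componentwise bound |u_i - x_i| <= |w_i + q_i| holds")
```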
Proof of Lemma 7
Proof
To simplify notation, we denote
where \(\mathcal {S}^{p^{[n]}}(\cdot )\) has been defined in (6).
According to the definitions of \(\mathcal {S}^{[n+1]}\) and \(\mathcal {S}\), the left-hand side of (25) is rewritten as:
Each term of the right-hand side of (44) can be analyzed as follows:
-
a)
According to (10), (27), (41), and line 4 of Algorithm 1, for any \(i \in \mathcal {S}^{[n+1]}\), it holds that
$$\begin{aligned} {\text {sign}}\left( x^{[n+1]}_{i}\right) = {\text {sign}}\left( u_{i}^{[n]}\right) ,~\left|x^{[n+1]}_{i}\right|\le \left|u_{i}^{[n]}\right|. \end{aligned}$$Then, we have
$$\begin{aligned} \left\Vert {u}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}^{[n]}\right\Vert _{2}^2&= \left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}+{u}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}^{[n]}-{\varvec{x}}^{[n+1]}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}\right\Vert _{2}^2 \nonumber \\&\ge \left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}\right\Vert _{2}^2 + \left\Vert {u}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}^{[n]}-{\varvec{x}}^{[n+1]}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}\right\Vert _{2}^2, \end{aligned}$$resulting in
$$\begin{aligned} \left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}\right\Vert _{2}^2 \le \left\Vert {u}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}^{[n]}\right\Vert _{2}^2 - \left\Vert {u}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}^{[n]}-{\varvec{x}}^{[n+1]}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}\right\Vert _{2}^2. \end{aligned}$$(46) -
b)
For \(\left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}}-{\varvec{x}}_{\mathcal {S}}\right\Vert _{2}^2\), we have
$$\begin{aligned} \left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}}-{\varvec{x}}_{\mathcal {S}}\right\Vert _{2}^2&= \left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}}-{u}_{\mathcal {S}}^{[n]}+{u}_{\mathcal {S}}^{[n]}-{\varvec{x}}_{\mathcal {S}}\right\Vert _{2}^2 \nonumber \\&\le 2\left( \left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}}-{u}_{\mathcal {S}}^{[n]}\right\Vert _{2}^2 + \left\Vert {u}_{\mathcal {S}}^{[n]}-{\varvec{x}}_{\mathcal {S}}\right\Vert _{2}^2\right) , \end{aligned}$$(47)where the inequality is due to \((a+b)^2\le 2a^2+2b^2\) for any \(a,b\in \mathbb {R}\). According to lines 3 and 4 of Algorithm 1, (10), and (27), it holds that
$$\begin{aligned} \left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}}-{u}_{\mathcal {S}}^{[n]}\right\Vert _{2}^2 \le |\mathcal {S}| \max \limits _{i\in \mathcal {S}}\left|x^{[n+1]}_i-u_i^{[n]}\right|^2 \le K \left|u_{[K+1]}^{[n]}\right|^2. \end{aligned}$$(48)Substituting (48) into (47) yields
$$\begin{aligned} \left\Vert {\varvec{x}}^{[n+1]}_{\mathcal {S}}-{\varvec{x}}_{\mathcal {S}}\right\Vert _{2}^2 \le 2K \left|u_{[K+1]}^{[n]}\right|^2+2\left\Vert {u}_{\mathcal {S}}^{[n]}-{\varvec{x}}_{\mathcal {S}}\right\Vert _{2}^2. \end{aligned}$$(49)
Then, substituting (46) and (49) into (44) results in
where the equality is due to \(\mathcal {S}^{[n+1]}\cup \mathcal {S}= \left( \mathcal {S}^{[n+1]}{\setminus }\mathcal {S}\right) \cup \mathcal {S}\).
According to lines 3 and 4 of Algorithm 1, (10), (27), and (43), it holds that
Then, substituting (51) and (52) into (50) results in
where the last inequality is due to (17), (27), and the fact that \(\mathcal {S}^{[n+1]}={\text {supp}}\left\{ {\varvec{x}}^{[n+1]}\right\} \) is also the set of indices of the K largest-magnitude entries in \({u}^{[n]}\).
For the right-hand side of (45), we carry out the following derivation:
where the inequality is due to \((a+b)^2\le 2a^2+2b^2\) for any \(a,b\in \mathbb {R}\), and the last equality follows from \(\mathcal {S}^{[n+1]}\cup \mathcal {S}=\mathcal {S}^{[n+1]}\cup \left( \mathcal {S}\setminus \mathcal {S}^{[n+1]}\right) \).
According to lines 3 and 4 of Algorithm 1, (27), and (43), it holds that
Substituting them into (54) yields
where (a) is due to (17) and the fact that \(\mathcal {S}^{[n+1]}={\text {supp}}\left\{ {\varvec{x}}^{[n+1]}\right\} \) is also the set of indices of the K largest-magnitude entries in \({u}^{[n]}\), and the last inequality follows from (27).
By combining (53) and (55), we have
where \(\nu ^{[n]}\) and \(\nu ^{*}\) have been defined in (26). Then, we prove (25) by taking the square root on both sides of (56). \(\square \)
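The support-selection step that this proof attributes to lines 3 and 4 of Algorithm 1 keeps the \(K\) largest-magnitude entries of \({u}^{[n]}\). Since Algorithm 1 itself is not reproduced in this excerpt, the following is only a minimal sketch of that hard-thresholding operation and of the quantity \(\left|u_{[K+1]}^{[n]}\right|\) used throughout the bounds:

```python
import numpy as np

def hard_threshold(u, K):
    """Keep the K largest-magnitude entries of u and zero out the rest.

    Also return |u_[K+1]|, the (K+1)-th largest magnitude, which bounds
    every discarded entry. A sketch of the support-selection step; the
    paper's Algorithm 1 is not reproduced in this excerpt.
    """
    idx = np.argsort(np.abs(u))[::-1]   # indices by decreasing magnitude
    out = np.zeros_like(u)
    out[idx[:K]] = u[idx[:K]]
    return out, np.abs(u[idx[K]])

u = np.array([0.2, -3.0, 1.5, 0.1, 2.4])
xK, uK1 = hard_threshold(u, K=2)
print(xK)    # only the two largest-magnitude entries survive
print(uK1)   # |u_[K+1]|, the third-largest magnitude
```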
Proof of Theorem 1
Proof
According to (27), it holds that
Now we consider two cases (i.e., \(z_{i}^{[n]} > 0\) and \(z_{i}^{[n]} \le 0\)) for the right-hand side of (57) based on (41):
-
a)
For any \(i\in \{1,\ldots ,N\}\), if \(z_{i}^{[n]} > 0\), we have \(u_{i}^{[n]}=z_{i}^{[n]}\) and \(\bar{u}_{i}^{[n]}=0\), yielding
$$\begin{aligned} \left|u_{i}^{[n]}-x_{i}\right|^2=\left|z_{i}^{[n]}-x_{i}\right|^2 \overset{(40)}{=}|w_{i}+q_{i}|^2 =|w_{i}+q_{i}|^2-\left|\bar{u}_{i}^{[n]}\right|^2. \end{aligned}$$(58) -
b)
For any \(i\in \mathcal {S}^{[n+1]}{\setminus } \mathcal {S}_{+}^{[n]}\), if \(z_{i}^{[n]} \le 0\), we have \(u_{i}^{[n]}=0\) and \(\bar{u}_{i}^{[n]}=z_{i}^{[n]}\), resulting in
$$\begin{aligned} \left|u_{i}^{[n]}-x_{i}\right|^2\overset{(a)}{=} & 0 \nonumber \\= & |w_{i}+q_{i}|^2-|w_{i}+q_{i}|^2 \nonumber \\\overset{(40)}{=} & |w_{i}+q_{i}|^2-\left|z_{i}^{[n]}-x_{i}\right|^2 \nonumber \\= & |w_{i}+q_{i}|^2-\left|z_{i}^{[n]}\right|^2 \nonumber \\= & |w_{i}+q_{i}|^2-\left|\bar{u}_{i}^{[n]}\right|^2, \end{aligned}$$(59)where (a) is due to \(u_{i}^{[n]}=0\) and \(x_{i}=0\).
Furthermore, by (27), we have \(\mathcal {S}_{+}^{[n]}=\left( {\text {supp}}\left\{ {u}^{[n]}_{[K+1]}\right\} {\setminus } \mathcal {S}\right) \cup \mathcal {S}\).
Then, for \(i\in {\text {supp}}\left\{ {u}^{[n]}_{[K+1]}\right\} {\setminus } \mathcal {S}\), if \(z_{i}^{[n]} \le 0\), it holds that \(u_{i}^{[n]}=0\), \(\bar{u}_{i}^{[n]}=z_{i}^{[n]}\), and \(x_{i}=0\). By using the same derivation as in (59), we attain
$$\begin{aligned} \left|u_{i}^{[n]}-x_{i}\right|^2=|w_{i}+q_{i}|^2-\left|\bar{u}_{i}^{[n]}\right|^2. \end{aligned}$$(60)For \(i\in \mathcal {S}\), if \(z_{i}^{[n]} \le 0\), we have
$$\begin{aligned} \left|u_{i}^{[n]}-x_{i}\right|^2=\left|x_{i}\right|^2\overset{(a)}{\le } & |w_{i}+q_{i}|^2-|w_{i}+x_{i}+q_{i}|^2 \nonumber \\\overset{(40)}{=} & |w_{i}+q_{i}|^2-\left|z_{i}^{[n]}\right|^2 \nonumber \\= & |w_{i}+q_{i}|^2-\left|\bar{u}_{i}^{[n]}\right|^2, \end{aligned}$$(61)where (a) comes from:
$$\begin{aligned} |w_{i}+q_{i}|^2&=|(w_{i}+q_{i}+x_{i})-x_{i}|^2\\&=|w_{i}+q_{i}+x_{i}|^2+|x_{i}|^2-2(w_{i}+q_{i}+x_{i})x_{i}\\&\ge |w_{i}+q_{i}+x_{i}|^2+|x_{i}|^2, \end{aligned}$$and the last inequality follows from \(x_{i}>0\) and \(z_{i}^{[n]}\overset{(40)}{=}w_{i}+q_{i}+x_{i} \le 0\) for \(i \in \mathcal {S}\).
Substituting (58), (59), (60), and (61) into (57) results in
Then, by combining (25) and (62), we have
where \(\nu ^{*}\) is defined in (26).
Taking the square root on both sides of (63), we attain
where (a) follows from (22) and (23), and (b) is due to (14), (15), (27), and
When \(\delta \) fulfills (28), the range of \(\beta \) in (29) is well-defined. Furthermore, it can be verified that:
By the above inequality, we have \(\frac{\sqrt{\nu ^{*}}(1+2\beta )-1}{\sqrt{\nu ^{*}}(1-\delta _{3K+1})}< \frac{\sqrt{\nu ^{*}}+1}{\sqrt{\nu ^{*}}(\delta _{3K+1}+1)}\), implying that the range of \(\alpha \) in (29) is also well-defined. Therefore, for any fixed \(\delta \) and \(\beta \), there exists an \(\alpha \) such that either \(\frac{\sqrt{\nu ^{*}}(1+2\beta )-1}{\sqrt{\nu ^{*}}(1-\delta _{3K+1})}<\alpha \le 1+\beta \) or \(1+\beta<\alpha <\frac{\sqrt{\nu ^{*}}+1}{\sqrt{\nu ^{*}}(\delta _{3K+1}+1)}\). Then, we carry out the following derivation:
-
a)
For \(\frac{\sqrt{\nu ^{*}}(1+2\beta )-1}{\sqrt{\nu ^{*}}(1-\delta _{3K+1})}<\alpha \le 1+\beta \), it holds that
$$\begin{aligned}&\sqrt{\nu ^{*}}(|1-\alpha + \beta | + \alpha \delta _{3K+1})\nonumber \\&\quad =\sqrt{\nu ^{*}}(1 + \beta - \alpha (1-\delta _{3K+1})) \nonumber \\&\quad =\sqrt{\nu ^{*}}\left( 1 + \beta - \frac{\sqrt{\nu ^{*}}(1+2\beta )-1}{\sqrt{\nu ^{*}}(1-\delta _{3K+1})}(1-\delta _{3K+1})\right) \nonumber \\&\quad =1-\sqrt{\nu ^{*}}\beta . \end{aligned}$$ -
b)
For \(1+\beta<\alpha <\frac{\sqrt{\nu ^{*}}+1}{\sqrt{\nu ^{*}}(\delta _{3K+1}+1)}\), it holds that
$$\begin{aligned}&\sqrt{\nu ^{*}}(|1-\alpha + \beta | + \alpha \delta _{3K+1})\nonumber \\&\quad =\sqrt{\nu ^{*}}(-1 - \beta + \alpha (1+\delta _{3K+1})) \nonumber \\&\quad =\sqrt{\nu ^{*}}\left( -1 - \beta + \frac{\sqrt{\nu ^{*}}+1}{\sqrt{\nu ^{*}}(\delta _{3K+1}+1)}(1+\delta _{3K+1})\right) \nonumber \\&\quad =1-\sqrt{\nu ^{*}}\beta . \end{aligned}$$
Hence, we have \(\sqrt{\nu ^{*}}(|1-\alpha + \beta | + \alpha \delta _{3K+1}+\beta )<1\); combining this with (64) and Lemma 3 gives (30) and (32). \(\square \)
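The endpoint algebra in cases a) and b) above can be verified numerically. The snippet below uses arbitrary test values for \(\nu ^{*}\), \(\delta _{3K+1}\), and \(\beta \) (they are not quantities from the paper) and confirms that both boundary choices of \(\alpha \) yield \(1-\sqrt{\nu ^{*}}\beta \):

```python
import numpy as np

# Spot-check of the endpoint algebra in the proof of Theorem 1:
# at the boundary values of alpha in cases a) and b),
# sqrt(nu)*(|1 - alpha + beta| + alpha*delta) equals 1 - sqrt(nu)*beta.
# nu stands for nu^* of (26), delta for delta_{3K+1}; the numeric values
# below are arbitrary test values, not quantities from the paper.
nu, delta, beta = 1.3, 0.2, 0.05
s = np.sqrt(nu)

def lhs(alpha):
    return s * (abs(1 - alpha + beta) + alpha * delta)

alpha_a = (s * (1 + 2 * beta) - 1) / (s * (1 - delta))   # case a) endpoint
alpha_b = (s + 1) / (s * (delta + 1))                    # case b) endpoint

assert abs(lhs(alpha_a) - (1 - s * beta)) < 1e-12
assert abs(lhs(alpha_b) - (1 - s * beta)) < 1e-12
print("both endpoints give 1 - sqrt(nu)*beta, as derived")
```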
Proof of Theorem 2
Proof
We first define \(\varvec{h}\in \mathbb {R}^{N}\) as:
where \(\mathcal {S}^{[n+1]}\) and \(\varLambda ^{[n]}\) are defined in (27) and (43), respectively.
According to the definitions of \(\mathcal {S}^{[n+1]}\) and \(\mathcal {S}\), we rewrite \(\left\Vert {\varvec{x}}^{[n+1]}-{\varvec{x}}\right\Vert _{1}\) as
Then, we analyze each term of the right-hand side of (66).
-
(a)
According to (27), we have
$$\begin{aligned} & \left\| \varvec{x}^{[n+1]}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}- \varvec{x}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}\right\| _{1}= \left\| \varvec{x}^{[n+1]}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}\right\| _{1} \nonumber \\ & \quad \overset{(a)}{=}\ \left\| {u}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}^{[n]}-\left|u_{[K+1]}^{[n]}\right|{\text {sign}}\left( {u}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}^{[n]}\right) \varvec{h}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}} \right\| _{1} \nonumber \\ & \quad \overset{(b)}{=}\left\| {u}_{\varLambda ^{[n]} \setminus \mathcal {S}}^{[n]}\right\| _{1}+\left\| {u}_{\mathcal {S}^{[n+1]}\setminus (\mathcal {S}\cup \varLambda ^{[n]})}^{[n]} \right. \nonumber \\ & \qquad \left. -\left|u_{[K+1]}^{[n]}\right|{\text {sign}}\left( {u}_{\mathcal {S}^{[n+1]}\setminus (\mathcal {S}\cup \varLambda ^{[n]})}^{[n]}\right) \varvec{h}_{\mathcal {S}^{[n+1]}\setminus (\mathcal {S}\cup \varLambda ^{[n]})}\right\| _{1} \nonumber \\ & \quad \overset{(c)}{=}\left\| {u}_{\varLambda ^{[n]} \setminus \mathcal {S}}^{[n]}\right\| _{1}+ \left\| {u}_{\mathcal {S}^{[n+1]}\setminus (\mathcal {S}\cup \varLambda ^{[n]})}^{[n]}\right\| _{1} \nonumber \\ & \qquad - \left|\mathcal {S}^{[n+1]}\setminus \left( \mathcal {S}\cup \varLambda ^{[n]}\right) \right|\left|u_{[K+1]}^{[n]}\right|\nonumber \\ & \quad =\left\| {u}_{\mathcal {S}^{[n+1]}\setminus \mathcal {S}}^{[n]}\right\| _{1} - \left| \mathcal {S}^{[n+1]}\setminus \left( \mathcal {S}\cup \varLambda ^{[n]}\right) \right| \left| u_{[K+1]}^{[n]}\right| , \end{aligned}$$(67)where (a) follows from lines 3 and 4 of Algorithm 1, (43), and (65), (b) and the last equality can be easily verified since \(\varLambda ^{[n]} \subset \mathcal {S}^{[n+1]}\), and (c) is due to: for \(i\in \mathcal {S}^{[n+1]}{\setminus }\left( \mathcal {S}\cup \varLambda ^{[n]}\right) \), it holds that
$$\begin{aligned}&\left|u_i^{[n]}-\left|u_{[K+1]}^{[n]}\right|{\text {sign}}\left( u_i^{[n]}\right) h_i\right|\\&\qquad \overset{(65)}{=} {\left\{ \begin{array}{ll} \big | u_i^{[n]}-\big | u_{[K+1]}^{[n]}\big |\big | =u_i^{[n]}-\left|u_{[K+1]}^{[n]}\right|=\left|u_i^{[n]}\right|-\left|u_{[K+1]}^{[n]}\right|, & u_i^{[n]}>0\\ \big | u_i^{[n]}+\big | u_{[K+1]}^{[n]}\big |\big | =-u_i^{[n]}-\left|u_{[K+1]}^{[n]}\right|=\left|u_i^{[n]}\right|-\left|u_{[K+1]}^{[n]}\right|, & u_i^{[n]}<0 \end{array}\right. }. \end{aligned}$$ -
b)
Based on (27), it holds that
$$\begin{aligned} \left\Vert \left( {\varvec{x}}^{[n+1]}-{\varvec{x}}\right) _{\mathcal {S}}\right\Vert _{1}\le & \left\Vert \left( {\varvec{x}}^{[n+1]}-{u}^{[n]}\right) _{\mathcal {S}}\right\Vert _{1}+\left\Vert \left( {u}^{[n]}-{\varvec{x}}\right) _{\mathcal {S}}\right\Vert _{1} \nonumber \\\le & \left|\mathcal {S}\setminus \varLambda ^{[n]}\right|\left|u_{[K+1]}^{[n]}\right|+\left\Vert \left( {u}^{[n]}-{\varvec{x}}\right) _{\mathcal {S}}\right\Vert _{1}, \end{aligned}$$(68)where the last inequality follows from lines 3 and 4 of Algorithm 1 and (43).
Substituting (67) and (68) into (66) results in
where the last inequality follows from (20) and \(\left|\mathcal {S}\setminus \varLambda ^{[n]}\right|\le K\).
For \(\Vert (\varvec{w}+\varvec{q})_{\mathcal {S}^{[n+1]}\cup \mathcal {S}}\Vert _{1}\), we have
where (a) follows from (22), (23), and \(\left|\mathcal {S}^{[n+1]}\cup \mathcal {S}\right|\le 2K\), and the last inequality is due to (18), (19), and (24).
Substituting (21) and (70) into (69) yields
where \(\gamma \) has been defined in (24).
According to (24), when \(\mu \) satisfies (33) while \(\alpha \) and \(\beta \) fulfill (34), it holds that \(\gamma =\mu \alpha \), resulting in
where the inequality follows from the range of \(\alpha \) in (34). This implies \(3\alpha \mu K+(K+1)\beta <1\); combining this with (71) and Lemma 3 yields (35) and (36). This proves Theorem 2. \(\square \)
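The mutual coherence \(\mu \) entering condition (33) is, by (13), the largest absolute inner product between distinct \(\ell _2\)-normalized columns of \(\varvec{A}\). A minimal sketch of its computation, on an assumed random Gaussian matrix (an illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_coherence(A):
    """mu = max_{i != j} |A_i^T A_j| over l2-normalized columns, cf. (13)."""
    An = A / np.linalg.norm(A, axis=0)   # normalize each column
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)             # exclude the i = j terms
    return G.max()

A = rng.standard_normal((40, 100))
mu = mutual_coherence(A)
assert 0 <= mu <= 1                      # Cauchy-Schwarz bound
print(f"mutual coherence mu = {mu:.3f}")
```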
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
He, Z., Wang, ZY., Wen, J. et al. Non-negative Sparse Recovery via Momentum-Boosted Adaptive Thresholding Algorithm. J Sci Comput 101, 47 (2024). https://doi.org/10.1007/s10915-024-02660-9
Keywords
- Non-negative sparse recovery
- Iterative thresholding algorithm
- Restricted isometry property
- Mutual coherence