Abstract
This paper considers the orthogonal matching pursuit (OMP) algorithm for sparse recovery, in both the noiseless and the noisy case, when partial prior information is available. The prior information takes the form of an estimated subset of the support of the sparse signal. First, we show that if the sensing matrix \(\varvec{A}\) satisfies the restricted isometry property (RIP) with \(\delta _{k+b+1}<\frac{1}{\sqrt{k-g+1}}\), then the OMP algorithm perfectly recovers any k-sparse signal \(\varvec{x}\) from \(\varvec{y}=\varvec{Ax}\) in \(k-g\) iterations when the prior support of \(\varvec{x}\) includes g true indices and b wrong indices. Furthermore, we show that the condition \(\delta _{k+b+1}<\frac{1}{\sqrt{k-g+1}}\) is optimal. Second, we establish the exact recovery of the remainder support (i.e., the set of indices in the true support of \(\varvec{x}\) but not in the prior support) from \(\varvec{y}=\varvec{Ax}+\varvec{v}\) under appropriate conditions. We also obtain a necessary condition for remainder support recovery based on the minimum magnitude of the nonzero elements of \(\varvec{x}\) on the remainder support. Numerical experiments demonstrate that OMP with partial prior information has better recovery performance than the plain OMP algorithm.
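For concreteness, a minimal sketch of the \(\mathrm {OMP}_{T_0}\) procedure analyzed in this paper (OMP seeded with the prior support estimate \(T_0\)) is given below; the function name and interface are our own, not the authors' implementation.

```python
import numpy as np

def omp_with_prior(A, y, k, g, prior_support, eps=1e-12):
    """Sketch of OMP_{T_0}: OMP initialized with a prior support estimate.

    A: (m, n) sensing matrix; y: (m,) measurements; k: sparsity level;
    g: number of correct indices assumed in prior_support, so k - g
    iterations remain; eps: threshold for the stopping rule ||r||_2 <= eps.
    """
    support = list(prior_support)                # Lambda_0 = T_0
    for _ in range(k - g):
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef             # residual, orthogonal to A_Lambda
        if np.linalg.norm(r) <= eps:             # stopping rule ||r^(t)||_2 <= eps
            break
        j = int(np.argmax(np.abs(A.T @ r)))      # most correlated column
        support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef                        # final least-squares estimate
    return x_hat, sorted(support)
```

On, e.g., a random Gaussian \(\varvec{A}\) with normalized columns, this sketch reproduces the behavior analyzed in the appendices: in the noiseless case it returns \(\varvec{x}\) after \(k-g\) iterations whenever each iteration selects a correct index.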



References
A.S. Bandeira, K. Scheinberg, L.N. Vicente, On partially sparse recovery, Technical Report, Department of Mathematics, University of Coimbra (2011)
R. Baraniuk, P. Steeghs, Compressive radar imaging, in Proceedings of the IEEE Radar Conference, pp. 128–133 (2007)
R.F.V. Borries, C.J. Miosso, C. Potes, Compressed sensing using prior information, in 2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2007), pp. 121–124 (2007)
T. Cai, A. Zhang, Sparse representation of a polytope and recovery of sparse signals and low-rank matrices. IEEE Trans. Inf. Theory 60(1), 122–132 (2014)
E.J. Candès, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)
W. Chen, Y. Li, G. Wu, Recovery of signals under the high order RIP condition via prior support information. Signal Process. 153, 83–94 (2018)
W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009)
M.A. Davenport, J.N. Laska, J.R. Treichler, R.G. Baraniuk, The pros and cons of compressive sensing for wideband signal acquisition: noise folding versus dynamic range. IEEE Trans. Signal Process. 60(9), 4628–4642 (2012)
S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Birkhäuser, Basel, 2013)
M.P. Friedlander, H. Mansour, R. Saab, O. Yilmaz, Recovering compressively sampled signals using partial support information. IEEE Trans. Inf. Theory 58(2), 1122–1134 (2012)
H. Ge, W. Chen, Recovery of signals by a weighted $\ell _2/\ell _1$ minimization under arbitrary prior support information. Signal Process. 148, 288–302 (2018)
M.A. Herman, T. Strohmer, High-resolution radar via compressed sensing. IEEE Trans. Signal Process. 57(6), 2275–2284 (2009)
C. Herzet, C. Soussen, J. Idier, R. Gribonval, Exact recovery conditions for sparse representations with partial support information. IEEE Trans. Inf. Theory 59(11), 7509–7524 (2013)
L. Jacques, A short note on compressed sensing with partially known signal support. Signal Process. 90(12), 3308–3312 (2010)
N.B. Karahanoglu, H. Erdogan, On the theoretical analysis of orthogonal matching pursuit with termination based on the residue. Exp. Brain Res. 187(1), 71–84 (2012)
N.B. Karahanoglu, H. Erdogan, Online recovery guarantees and analytical results for OMP. arXiv:1210.5991v2 (2013)
M.A. Khajehnejad, W. Xu, A.S. Avestimehr, B. Hassibi, Weighted $\ell _1$ minimization for sparse recovery with prior information, in IEEE International Symposium on Information Theory (ISIT), pp. 483–487 (2009)
B. Li, Y. Shen, Z. Wu, J. Li, Sufficient conditions for generalized orthogonal matching pursuit in noisy case. Signal Process. 108, 111–123 (2015)
H. Liu, J. Peng, Sparse signal recovery via alternating projection method. Signal Process. 143, 161–170 (2018)
H. Liu, Y. Ma, Y. Fu, An improved RIP-based performance guarantee for sparse signal recovery via simultaneous orthogonal matching pursuit. Signal Process. 144, 29–35 (2018)
W. Lu, N. Vaswani, Exact reconstruction conditions and error bounds for regularized modified basis pursuit, in Proceedings of the Asilomar Conference on Signals, Systems and Computers (2010)
M. Lustig, D.L. Donoho, J.M. Santos, J.M. Pauly, Compressed sensing MRI. IEEE Signal Process. Mag. 25(2), 72–82 (2008)
Q. Mo, A sharp restricted isometry constant bound of orthogonal matching pursuit. arXiv:1501.01708v1 [cs.IT] (2015)
L.B. Montefusco, D. Lazzaro, S. Papi, A fast algorithm for nonconvex approaches to sparse recovery problems. Signal Process. 93, 2636–2647 (2013)
D. Needell, J.A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2008)
D. Needell, R. Saab, T. Woolf, Weighted $\ell _1$-minimization for sparse recovery under arbitrary prior information. Inf. Inference J. IMA 6, 284–309 (2017)
J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007)
J.A. Tropp, J.N. Laska, M.F. Duarte, J.K. Romberg, R.G. Baraniuk, Beyond Nyquist: efficient sampling of sparse, bandlimited signals. IEEE Trans. Inf. Theory 56(1), 520–544 (2010)
N. Vaswani, W. Lu, Modified-CS: modifying compressive sensing for problems with partially known support. IEEE Trans. Signal Process. 58(9), 4595–4607 (2010)
J. Wen, J. Wang, Q. Zhang, Nearly optimal bounds for orthogonal least squares. IEEE Trans. Signal Process. 65(20), 5347–5356 (2017)
J. Wen, Z. Zhou, J. Wang, X. Tang, Q. Mo, A sharp condition for exact support recovery of sparse signals with orthogonal matching pursuit. IEEE Trans. Signal Process. 65, 1370–1382 (2017)
R. Zhang, S. Li, A proof of conjecture on restricted isometry property constants $\delta _{tk}$ ($0<t<\frac{4}{3}$). IEEE Trans. Inf. Theory 64(3), 1699–1705 (2018)
Acknowledgements
The authors thank the referees for their valuable suggestions and comments, which greatly improved the presentation of this paper. The authors are also grateful to Qun Mo for helpful discussions about Remark 3. This work was supported by the NSF of China (Nos. 11871109, 61471343), the National Key Research and Development Program of China (No. 2018YFC2000605), NSAF (Grant No. U1830107) and the Science Challenge Project (Grant No. TZ2018001).
Appendices
Appendix A: The Proof of Lemma 1
Proof
For simplicity, let
and
where \(i_t=\arg \max \nolimits _{i\in (T\cup \Lambda _t)^c}|\langle \varvec{Ae}_i,\varvec{A}_{T\cup \Lambda _t}\varvec{z}_{T\cup \Lambda _t}\rangle |\). By \(|T\cap T_0|=g\), \(|T|=k\), \(\Lambda _t\subseteq (T\cup T_0)\), \(T_0\subseteq \Lambda _t\) and \(|\Lambda _t{\setminus } T_0|=t\), it is clear that \(|T\backslash \Lambda _t|=k-g-t\). Then by (13), one obtains that
where (a) and (c), respectively, follow from \(|T\backslash \Lambda _t|=k-g-t\) and the Hölder inequality, and (b) is due to the fact that \(\varvec{A}_{\Lambda _t}^{\top }\varvec{A}_{T\cup \Lambda _t}\varvec{z}_{T\cup \Lambda _t}=\varvec{0}\).
Let \(s=-\frac{\sqrt{k-g-t+1}-1}{\sqrt{k-g-t}}\) and
then \({\hat{s}}_{i_t}^2=\Vert \varvec{z}_{T\cup \Lambda _t}\Vert _2^2s^2\) and
Further, by (15), (14), \(s^2<1\) and some simple calculations, we derive that
where the last equality is from the definitions of s and \({\hat{s}}_{i_t}\) and
Because \(0\le t< k-g\), \(\varvec{A}\) satisfies the RIP of order \(k+b+1\) with \(\delta _{k+b+1}\), \(|T\cup \Lambda _t|=k+b\) and \(i_t\in (T\cup \Lambda _t)^c\), one obtains that
where the equality is from \(i_t\in (T\cup \Lambda _t)^c\), \(\Vert \varvec{e}_{i_t}\Vert _2=1\) and the fact \({\hat{s}}_{i_t}^2=\Vert \varvec{z}_{T\cup \Lambda _t}\Vert _2^2s^2.\) Combining this with (16), we have that
where the equality is due to the definition of s, which implies that
\(\square \)
Appendix B: The Proof of Theorem 1
Proof
We first prove by induction that, under the condition (4), i.e., \(\delta _{k+b+1}<\frac{1}{\sqrt{k-g+1}}\), the \(\mathrm {OMP}_{T_0}\) algorithm succeeds in the sense of Definition 2.
For the first iteration, \(\Lambda _0=T_0\) and \(\varvec{r}^{(0)}=\varvec{A}_{T\cup T_0}\varvec{z}_{T\cup T_0}\). By Lemma 1 with \(t=0\) and (4), i.e., \(\delta _{k+b+1}<\frac{1}{\sqrt{k-g+1}}\), we have that
That is,
Then the \(\mathrm {OMP}_{T_0}\) algorithm selects a correct index in the first iteration, i.e., \(j_1\in T{\setminus } T_0\).
Suppose that the \(\mathrm {OMP}_{T_0}\) algorithm has performed t (\(1\le t <k-g\)) iterations successfully, that is, \(\Lambda _{t}{\setminus } T_0 \subseteq T{\setminus } T_0\). For the \((t+1)\)th iteration, by the equality (7) with \(\varvec{v}=\varvec{0}\), one has that
where (a) and (b) follow from Lemma 1 and (4), respectively. Then the \(\mathrm {OMP}_{T_0}\) algorithm succeeds in the \((t+1)\)th iteration, i.e., \(j_{t+1}\in T{\setminus }\Lambda _t\subseteq T{\setminus } T_0\). Therefore, if \(\delta _{k+b+1}<\frac{1}{\sqrt{k-g+1}}\), then the \(\mathrm {OMP}_{T_0}\) algorithm succeeds by Definition 2.
It remains to prove \(\varvec{x}=\hat{\varvec{x}}\), where \(\varvec{{\hat{x}}}\) is the output of the \(\mathrm {OMP}_{T_0}\) algorithm after \(k-g\) successful iterations. Since the \(\mathrm {OMP}_{T_0}\) algorithm has performed \(k-g\) iterations successfully, we have \(\Lambda _{k-g}=T\cup T_0\), and hence
where (a) and (b), respectively, follow from the fact that \(\varvec{A}\) satisfies the RIP of order \(k+b+1\) (so that \(\varvec{A}_{\Lambda _{k-g}}\) has full column rank) and from \(\Lambda _{k-g}=T\cup T_0\). This completes the proof of the theorem. \(\square \)
Appendix C: The Proof of Theorem 2
Proof
For given integers \(k>0\), \(b\ge 0\) and \(0 \le g<k\), let \(\varvec{A}_1\in {\mathbb {R}}^{(k+b+1)\times (k+b+1)}\) be
where \(\beta =\frac{1}{\sqrt{(k-g+1)(k-g)}}\). Then
where \(\gamma =(k-g)\beta ^2=\frac{1}{k-g+1}\). By elementary determinant transformations, one can verify that
where \(\mu =1-\lambda \). Then the eigenvalues \(\{\lambda _i(\varvec{A}_1^{\top }\varvec{A}_1)\}_{i=1}^{k+b+1}\) of \(\varvec{A}_1^{\top }\varvec{A}_1\) are
Now, take the matrix
where \(\varvec{A}_2\in {\mathbb {R}}^{(m-k-b-1)\times (n-k-b-1)}\) satisfies \(1-\frac{1}{\sqrt{k-g+1}}\le \lambda ((\varvec{A}^{\top }\varvec{A})_{|S_1|\times |S_2|}) \le 1+\frac{1}{\sqrt{k-g+1}}\) with \(S_1\subseteq [m]\), \(S_2\subseteq [n]\) and \(|S_1|=|S_2|=k+b+1\).
Moreover, by the definition of the RIP and [7, Remark 1], the matrix \(\varvec{A}\) satisfies the RIP with
Let the k-sparse signal
and the prior support
then
For the first iteration, from (7) with \(t=0\) and \(\varvec{y}=\varvec{A}\bar{\varvec{x}}\) [\(\varvec{A}\) is defined in (19)] it follows that
where the last equality is from (19) and the fact
and \(\varvec{A}_{T_0}^{\top }\varvec{A}_{T{\setminus } T_0}\bar{\varvec{x}}_{T{\setminus } T_0}=\varvec{0}\).
On the other hand, for \(i\in T{\setminus } T_0\), one has
For \(i\in (T\cup T_0)^c=\{k+b+1\}\), it follows immediately that
Clearly,
which implies that the \(\mathrm {OMP}_{T_0}\) algorithm may fail to identify an index of the subset \(T{\setminus } T_0\) in the first iteration. Hence, the \(\mathrm {OMP}_{T_0}\) algorithm may fail for the matrix \(\varvec{A}\) given in (19), the k-sparse signal \(\bar{\varvec{x}}\) and the prior support \(T_0\). \(\square \)
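As a side remark, the sharpness construction above can be probed numerically in small dimensions, since the restricted isometry constant of Definition 1 can be computed by enumerating all supports of a given size. The following brute-force sketch is our own illustration (exponential in n, so only for toy examples); applied to the matrix \(\varvec{A}\) in (19), it can be used to check that \(\delta _{k+b+1}=\frac{1}{\sqrt{k-g+1}}\) is attained.

```python
import numpy as np
from itertools import combinations

def rip_constant(A, s):
    """Brute-force RIP constant delta_s: the smallest delta such that
    (1 - delta) <= lambda(A_S^T A_S) <= (1 + delta) for all |S| = s."""
    delta = 0.0
    for S in combinations(range(A.shape[1]), s):
        cols = list(S)
        G = A[:, cols].T @ A[:, cols]      # Gram matrix of the column submatrix
        eig = np.linalg.eigvalsh(G)        # eigenvalues in ascending order
        delta = max(delta, 1.0 - eig[0], eig[-1] - 1.0)
    return delta
```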
Appendix D: The Proof of Theorem 3
Proof
The proof consists of two parts. In the first part, we show that the \(\mathrm {OMP}_{T_0}\) algorithm selects indices from the remainder support \(T{\setminus } T_0\) in each iteration under the conditions (4) and (5). In the second part, we prove that the \(\mathrm {OMP}_{T_0}\) algorithm performs exactly \(|T{\setminus } T_0|=k-g\) iterations under the stopping rule \(\Vert \varvec{r}^{(t)}\Vert _2\le \varepsilon \).
Part I: We argue by induction. Suppose that the \(\mathrm {OMP}_{T_0}\) algorithm has performed t (\(1 \le t < k-g\)) iterations successfully, that is, \(\Lambda _t\subseteq T\cup T_0\) and \(j_1,\ldots , j_t \in T{\setminus } T_0\). Then, by the \(\mathrm {OMP}_{T_0}\) algorithm in Table 1, we need to show that \(j_{t+1} \in T{\setminus } \Lambda _t\), which means that the \(\mathrm {OMP}_{T_0}\) algorithm succeeds in the \((t+1)\)th iteration. Since \(\varvec{r}^{(t)}\) is orthogonal to each column of \(\varvec{A}_{\Lambda _t}\), it suffices to prove that
for the \((t+1)\)th iteration.
From (7), one has that
and
Therefore, to show (20), by (21) and (22), it suffices to prove that
We first give a lower bound on the left-hand side of (23). From Lemma 1, it follows that
where the second inequality is due to the definition of \(\varvec{z}_{T\cup T_0}\) in (8) and the last inequality follows from \(0\le t<k-g\) and the induction assumption \(j_1,\ldots ,j_t\in T{\setminus } T_0\) (i.e., \(|T{\setminus }\Lambda _t|=k-g-t\)), which implies
We now give an upper bound on the right-hand side of (23). There exist indices \(i^{(t)}\in T{\setminus } \Lambda _t\) and \(i_1^{(t)}\in (T\cup T_0)^c\) satisfying
and
respectively. Therefore,
where (a) follows from the fact that \(\varvec{A}\) satisfies the RIP of order \(k-g+1\) (\(g<k\)) and (b) is due to the fact that
Combining (24) and (25), we conclude that (23) holds. Hence the \(\mathrm {OMP}_{T_0}\) algorithm selects an index from the subset \(T{\setminus } \Lambda _t\) in the \((t+1)\)th iteration. In conclusion, we have shown that the \(\mathrm {OMP}_{T_0}\) algorithm selects an index from \(T{\setminus } T_0\) in each iteration under the conditions (4) and (5).
Part II: We prove that the \(\mathrm {OMP}_{T_0}\) algorithm performs exactly \(k-g\) iterations. That is, we show that \(\Vert \varvec{r}^{(k-g)}\Vert _2\le \varepsilon \) and \(\Vert \varvec{r}^{(t)}\Vert _2>\varepsilon \) for \(0\le t <k-g\).
Since the \(\mathrm {OMP}_{T_0}\) algorithm selects an index of \(T{\setminus } T_0\) in each iteration under the conditions (4) and (5), we have \(\Lambda _t\subseteq T\cup T_0\) and \(|\Lambda _t|=g+b+t\) for \(0\le t<k-g\), and \(\Lambda _{k-g}=T\cup T_0\), which implies \(\varvec{P}^{\perp }_{\Lambda _{k-g}}\varvec{A}_T\varvec{x}_T=\varvec{0}\). Moreover,
For \(0\le t<k-g\), by (7) we have
where (a) holds because \(\varvec{A}\) satisfies the RIP of order \(k+b+1\), \(|T\cup \Lambda _{t}|=k+b\) and \(\Vert \varvec{P}_{\Lambda _t}^\perp \varvec{v}\Vert _2\le \varepsilon \), and (b) follows from (5). We have completed the proof. \(\square \)
Appendix E: The Proof of Theorem 5
Proof
By (6), the condition (5) holds. Then, from Theorem 3, the conditions (4) (i.e., \(\delta _{k+b+1}<\frac{1}{\sqrt{k-g+1}}\)) and (5) ensure that the \(\mathrm {OMP}_{T_0}\) algorithm with the stopping rule \(\Vert \varvec{r}^{(t)}\Vert _2\le \varepsilon \) stops after performing exactly \(k-g\) iterations successfully. Clearly, \(\Lambda _{k-g}=T\cup T_0\). For the \(\mathrm {OMP}_{T_0}\) algorithm in Table 1, since \(\varvec{A}\) satisfies the RIP of order \(k+b+1\) (i.e., \(\varvec{A}_{T\cup T_0}\) has full column rank, which implies \(\varvec{A}^{\dag }_{T\cup T_0}=(\varvec{A}_{T\cup T_0}^{\top }\varvec{A}_{T\cup T_0})^{-1}\varvec{A}_{T\cup T_0}^{\top }\)), one has
where
Furthermore,
and
Then, by (28), one obtains
where (a) is from (29) and (5), and (b) follows from (29). In addition, by the definition of RIP, \(\mathrm {supp}(\varvec{x})=T\) and (28) meaning \(\mathrm {supp}(\varvec{x}^{(k-g)})=T\cup T_0\), one has
where (a) is due to (26), and (b) is from (5), (27) and (29). \(\square \)
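The identity \(\varvec{A}^{\dag }_{T\cup T_0}=(\varvec{A}_{T\cup T_0}^{\top }\varvec{A}_{T\cup T_0})^{-1}\varvec{A}_{T\cup T_0}^{\top }\) used in the proof holds for any matrix with full column rank; a quick numerical sanity check (our own, with arbitrary small dimensions) is the following.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 3))             # full column rank with probability 1
lhs = np.linalg.inv(B.T @ B) @ B.T          # (B^T B)^{-1} B^T
assert np.allclose(lhs, np.linalg.pinv(B))  # agrees with the pseudo-inverse
```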
Appendix F: The Proof of Theorem 4
Proof
The proof below is rooted in [31]; however, some essential modifications are needed to adapt the results to a sparse signal \(\varvec{x}\) with the prior support \(T_0\). We argue by contradiction: we construct a linear model \(\varvec{y}=\varvec{Ax}+\varvec{v}\), where \(\varvec{A}\) satisfies the RIP of order \(k+b+1\) with \(0\le \delta _{k+b+1}(\varvec{A})=\delta _{k+b+1}<1\), \(\Vert \varvec{v}\Vert _2\le \varepsilon \), and \(\varvec{x}\) is a k-sparse signal with the prior support \(T_0\) satisfying
such that the \(\mathrm {OMP}_{T_0}\) algorithm may fail to exactly recover the remainder support \(T{\setminus } T_0\) of the signal \(\varvec{x}\) within \(k-g\) iterations.
It is well known that there exist unit vectors \(\varvec{\xi }^{(1)},\varvec{\xi }^{(2)},\ldots ,\varvec{\xi }^{(k-g-1)}\in {\mathbb {R}}^{k-g}\) such that the matrix
is orthogonal, which implies \(\langle \varvec{\xi }^{(i)}, \varvec{\xi }^{(j)}\rangle =0\) and \(\langle \varvec{\xi }^{(i)}, \varvec{1}_{k-g}\rangle =0\) for \(i,j=1,\ldots ,k-g-1\) and \(i\ne j\), where \(\varvec{1}_{k-g}=(1,\ldots ,1)^{\top }\in {\mathbb {R}}^{k-g}\). Let the matrix
where the submatrix \(\varvec{\Xi }=[\varvec{\xi }^{(1)},\varvec{\xi }^{(2)},\ldots ,\varvec{\xi }^{(k-g-1)}]\),
and \(\tau =\frac{1}{\sqrt{\eta ^2+1}}\). Then \(\varvec{U}\) is also an orthogonal matrix.
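The unit vectors \(\varvec{\xi }^{(1)},\ldots ,\varvec{\xi }^{(k-g-1)}\) whose existence is invoked above can be produced constructively, e.g., by a QR factorization that completes the normalized all-ones vector to an orthonormal basis of \({\mathbb {R}}^{k-g}\). The sketch below is one possible construction of our own; with \(d=k-g\), the columns after the first give a valid choice of \(\varvec{\Xi }\).

```python
import numpy as np

def complete_ones_basis(d):
    """Return an orthogonal d x d matrix whose first column is ones/sqrt(d);
    the remaining d - 1 columns can serve as xi^(1), ..., xi^(d-1)."""
    ones = np.ones((d, 1)) / np.sqrt(d)
    # QR of [ones | I_d]: the first column of Q spans the same line as `ones`
    Q, _ = np.linalg.qr(np.hstack([ones, np.eye(d)]))
    if Q[0, 0] < 0:                         # fix the sign ambiguity of QR
        Q = -Q
    return Q
```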
Let \(\varvec{D}\in {\mathbb {R}}^{(k+b+1)\times (k+b+1)}\) be a diagonal matrix with
and \(\varvec{A}=\varvec{DU}\), then \(\varvec{A}^{\top }\varvec{A}=\varvec{U}^{\top }\varvec{D}^2\varvec{U}\). In the following, we show that \(\delta _{k+b+1}(\varvec{A})=\delta _{k+b+1}\). For any \(\varvec{x}\in {\mathbb {R}}^{k+b+1}\), setting \(\hat{\varvec{\nu }}=\varvec{Ux}\), we have that
and
where (a) and (b) result from the fact that \(\varvec{U}\) is an orthogonal matrix. Then, by Definition 1, we have \(\delta _{k+b+1}(\varvec{A})\le \delta _{k+b+1} \). It remains to prove that \(\varvec{A}=\varvec{DU}\) satisfies \(\delta _{k+b+1}(\varvec{A})\ge \delta _{k+b+1}.\) Let the vector
then \(\hat{\varvec{x}}\) is \((k+b+1)\)-sparse and \(\Vert \hat{\varvec{x}}\Vert _2^2=1\). By the definitions of \(\varvec{D}\) and \(\varvec{A}\), we obtain that
Then, \(\delta _{k+b+1}(\varvec{A})\ge \delta _{k+b+1}\). In conclusion, \(\delta _{k+b+1}(\varvec{A})=\delta _{k+b+1}\).
Let the original signal
where \(\theta \) is defined in (30). Then \(\varvec{x}\) in (33) is k-sparse with the support \(T=\{1,2,\ldots ,k\}\) and the prior support \(T_0=\{k-g+1,\ldots ,k+b\}\), and it satisfies (30). It is not hard to prove that \(\varvec{A}_{T{\setminus } T_0}=\varvec{DU}_{T{\setminus } T_0}\). Moreover, by some simple calculations, we derive that
and
where \(\mu =\frac{(1-\delta _{k+b+1})+(1+\delta _{k+b+1})\eta ^2}{\eta ^2+1}\theta \). Similarly, let the error vector
then \(\Vert \varvec{v}\Vert _2\le \varepsilon \) and
By (34) and (35), it is clear that
Therefore, using (7) and the above equality, we obtain that
It follows that
From (30) and the above inequality, it follows that
which means that the \(\mathrm {OMP}_{T_0}\) algorithm may choose the wrong index \(k+b+1\) in the first iteration. That is, the remainder support \(T{\setminus } T_0\) of the signal \(\varvec{x}\) may not be exactly recovered in \(k-g\) iterations by the \(\mathrm {OMP}_{T_0}\) algorithm. This completes the proof. \(\square \)
Appendix G: The Proof of Theorem 6
Proof
By \(g\ge (1-\frac{1}{c^2})(k+1)\), we derive that
Since \(1 \le b\le (c-2)\lceil \frac{k}{2}\rceil \), we have \(k+b+1\le c\lceil \frac{k}{2}\rceil \). Then \(\delta _{k+b+1}\le \delta _{c\lceil \frac{k}{2}\rceil }\). Therefore, from \(\delta _{cr}<c\cdot \delta _{2r}\) for any positive integers c and r (see [25, Corollary 3.4]), the fact \(k+1\ge 2\lceil \frac{k}{2}\rceil \) for \(k\ge 2\), \(\delta _{k+1}<\frac{1}{\sqrt{k+1}}\) and the inequality (36), it follows that
which implies that the condition on \(\delta _{k+b+1}\) in this paper is weaker than the sufficient condition \(\delta _{k+1}<\frac{1}{\sqrt{k+1}}\). This completes the proof of the theorem. \(\square \)
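For a concrete instance of the hypotheses of Theorem 6 (a numerical check of our own): take \(c=3\) and \(k=17\). Then \(g\ge (1-\frac{1}{c^2})(k+1)=\frac{8}{9}\cdot 18=16\) together with \(g<k\) forces \(g=16\), any \(1\le b\le (c-2)\lceil \frac{k}{2}\rceil =9\) is admissible, and indeed \(k+b+1\le 27=c\lceil \frac{k}{2}\rceil \).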