Preconditioning for Orthogonal Matching Pursuit with Noisy and Random Measurements: The Gaussian Case

  • Short Paper, published in Circuits, Systems, and Signal Processing

Abstract

The success of orthogonal matching pursuit (OMP) in sparse signal recovery depends heavily on its ability to recover the correct support. Based on a support recovery guarantee for OMP expressed in terms of the mutual coherence, and on a result about the concentration of the extreme singular values of a Gaussian random matrix, this paper proposes a preconditioning method for increasing the recovery rate of OMP from random and noisy measurements. Compared with several existing preconditionings, the proposed method reduces the mutual coherence with provably high probability. At the same time, with high probability the proposed preconditioning incurs only a slight signal-to-noise ratio reduction, which is empirically shown to be less severe than that caused by a recently suggested technique for the noisy case. Simulations show the advantages of the proposed preconditioning over other currently relevant ones in terms of both the performance improvement for OMP and the computation time.



Notes

  1. We call the ratio \(x_{\min }/\Vert \mathbf {e}\Vert _{2}\) the signal-to-noise ratio (SNR) for OMP. According to Theorem 1, the higher the SNR for OMP, the better OMP performs.

  2. By the footnote for Theorem 1, after normalizing the matrix \(\mathbf {P\Phi }\) in \(\mathbf {Py}\) so that Theorem 1 applies, this quantity is just the reciprocal of the SNR for pre1-OMP.

  3. See the footnote for the fact ➀ in the proof of Theorem 5.

  4. Note that for a given \(m\times n\) matrix \(\mathbf {A}\) all of whose columns are nonzero, \(\mu \,(\mathbf {A})\) equals \(\mu \,(\mathbf {AD})\) where \(\mathbf {D}\) is a diagonal matrix with jth diagonal element of the form \(\Vert \mathbf {\alpha }_{j}\Vert _{2}^{-1}\), because by the definition of the mutual coherence, \(\mu \,(\mathbf {A})=\max _{1\le j<k\le n}\,\left| \langle \mathbf {\alpha }_{j},\mathbf {\alpha }_{k}\rangle \right| /(\Vert \mathbf {\alpha }_{j}\Vert _{2}\cdot \Vert \mathbf {\alpha }_{k}\Vert _{2})\) and \(\mu \,(\mathbf {AD})\doteq \max _{1\le j<k\le n}\,\left| \langle \mathbf {\alpha }_{j}/\Vert \mathbf {\alpha }_{j}\Vert _{2},\mathbf {\alpha }_{k}/\Vert \mathbf {\alpha }_{k}\Vert _{2}\rangle \right| \).

    Here, all the columns of the matrix \(\mathbf {\Phi }\in E_{4}^{\mathrm {c}}\) are nonzero, and the preconditioner \(\mathbf {P}\), whose existence is guaranteed by the fact that \(\mathbf {\Phi }\in E_{\epsilon }\), is invertible; hence \(\mathbf {P\varphi }_{j}\ne \mathbf {0}\) for all \(1\le j\le n\). A small numerical illustration of this rescaling invariance appears at the end of these notes.

  5. After normalizing the matrix \(\mathbf {\Phi }\) (Theorem 2 applies only to this case) and then comparing Theorems 1 and 2, this quantity can similarly be regarded as the SNR for mod-OMP.

  6. Here, we use both the zero-one and the uniform sparse signals without further explanation; both types are introduced in Sect. 4, where the reasons for selecting them are also given.

  7. Although Theorem 3 is applicable to the case \(n>9m\), the proposed preconditioner can be effective even under the more relaxed condition \(n=2m\), as demonstrated in Fig. 3a.

  8. This is why we do not use another commonly applied type of test sparse signal, the Gaussian sparse signal [6], whose nonzero entries are drawn independently from the distribution N(0, 1): when sorted in descending order, the amplitudes of the components of such a signal decay quickly with the sorted indices, so that \(x_{\min }\approx 0\).

  9. We also run mod-OMP with the same experimental setups and record its recovery rates in Fig. 1, so as to facilitate the subsequent discussion in Sect. 4.2.

  10. A slightly smaller variance is enough to produce a strong-noise environment in the zero-one case, because of its smaller minimum nonzero magnitude \(x_{\min }=1\).

  11. For \(\mathbf {e}\sim N(\mathbf {0},\sigma _{e}^{2}\mathbf {I}_{m})\), we can obtain \(\Pr \,(\vert \Vert \mathbf {e}\Vert _{2}^{2}-m\sigma _{e}^{2}\vert \le m\sigma _{e}^{2})\ge 1-\exp \,(-m/9)\) from Theorem 9 in the Appendix.

  12. It was shown in [20] that chaotic measurement matrices perform as well as Gaussian measurement matrices while being easier to implement physically, with only one initial state needing to be stored.
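The column-rescaling invariance noted in footnote 4 is easy to confirm numerically. Below is a minimal NumPy sketch (not from the paper; the matrix size and random seed are arbitrary illustrative choices):

```python
import numpy as np

def mutual_coherence(A):
    # mu(A): largest absolute inner product between distinct,
    # l2-normalized columns of A
    G = A / np.linalg.norm(A, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)   # ignore self inner products
    return gram.max()

rng = np.random.default_rng(0)
m, n = 64, 256
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # entries ~ N(0, 1/m)

# D rescales the jth column by the reciprocal of its l2-norm (footnote 4)
D = np.diag(1.0 / np.linalg.norm(Phi, axis=0))
print(mutual_coherence(Phi), mutual_coherence(Phi @ D))  # identical values
```

Because \(\mu\) already normalizes the columns, rescaling them by any positive diagonal matrix cannot change it, which is exactly the content of footnote 4.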

References

  1. Z. Ben-Haim, Y.C. Eldar, M. Elad, Coherence-based performance guarantees for estimating a sparse vector under random noise. IEEE Trans. Signal Process. 58, 5030–5043 (2010)

  2. M. Bucolo, R. Caponetto, L. Fortuna, M. Frasca, A. Rizzo, Does chaos work better than noise? IEEE Circuits Syst. Mag. 2, 4–19 (2002)

  3. T.T. Cai, L. Wang, Orthogonal matching pursuit for sparse signal recovery with noise. IEEE Trans. Inf. Theory 57, 4680–4688 (2011)

  4. E.J. Candes, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005)

  5. Y.T. Chen, J.G. Peng, Influences of preconditioning on the mutual coherence and the restricted isometry property of Gaussian/Bernoulli measurement matrices. Linear Multilinear Algebra 64, 1750–1759 (2016)

  6. W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55, 2230–2249 (2009)

  7. K.R. Davidson, S.J. Szarek, Local operator theory, random matrices and Banach spaces, in Handbook of the Geometry of Banach Spaces, ed. by W.B. Johnson, J. Lindenstrauss (Elsevier, Amsterdam, 2001), pp. 317–366

  8. D.L. Donoho, M. Elad, V.N. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 52, 6–18 (2006)

  9. M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer, New York, 2010), pp. 25, 98

  10. S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, New York, 2013), pp. 65–69, 69–70

  11. M. Lin, G. Sinnamon, The generalized Wielandt inequality in inner product spaces. Eurasian Math. J. 3, 72–85 (2012)

  12. K. Schnass, P. Vandergheynst, Average performance analysis for thresholding. IEEE Signal Process. Lett. 14, 828–831 (2007)

  13. K. Schnass, P. Vandergheynst, Dictionary preconditioning for greedy algorithms. IEEE Trans. Signal Process. 56, 1994–2002 (2008)

  14. A.M. Tillmann, M.E. Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory 60, 1248–1259 (2014)

  15. J.A. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50, 2231–2242 (2004)

  16. J.A. Tropp, On the conditioning of random subdictionaries. Appl. Comput. Harmon. Anal. 25, 1–24 (2008)

  17. J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53, 4655–4666 (2007)

  18. E. Tsiligianni, L.P. Kondi, A.K. Katsaggelos, Preconditioning for underdetermined linear systems with sparse solutions. IEEE Signal Process. Lett. 22, 1239–1243 (2015)

  19. M.J. Wainwright, Information theoretic limits on sparsity recovery in the high dimensional and noisy setting. IEEE Trans. Inf. Theory 55, 5728–5741 (2009)

  20. L. Yu, J.P. Barbot, G. Zheng, H. Sun, Compressive sensing with chaotic sequence. IEEE Signal Process. Lett. 17, 731–734 (2010)

  21. J. Zhao, X. Bai, S.H. Bi, R. Tao, Coherence-based analysis of modified orthogonal matching pursuit using sensing dictionary. IET Signal Process. 9, 218–225 (2015)

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Contracts 11131006, 41390450 and 91330204, in part by EU FP7-IRSES Project LIVCODE under Grant 295151, and in part by the National Basic Research Program of China under Contract 2013CB329404. We would like to thank Evaggelia Tsiligianni for providing the code implementing the preconditioning in her own work, and for patiently discussing its details. Special thanks are due to Karin Schnass for instructive discussions on her work. We would also like to thank the referees for numerous suggestions that helped to clarify the exposition and argumentation.

Author information

Correspondence to Jigen Peng.

Appendix: Some Necessary Results for the Proofs

For derivations in Sect. 3, we need several results summarized in the theorems below.

Theorem 6

(Theorem 3.4 in [11]) Let \(\mathbf {A}\in {\mathbb {R}}^{m\times m}\) be a symmetric positive definite matrix with eigenvalues \(\lambda _{1}\ge \cdots \ge \lambda _{m}>0\). For any two nonzero, linearly independent vectors \(\mathbf {x},\mathbf {y}\in {\mathbb {R}}^{m}\), let \(\theta \in (0,\pi /2]\) be the angle between the lines spanned by \(\mathbf {x}\) and \(\mathbf {y}\), so that \(\Vert \mathbf {x}\Vert _{2}\Vert \mathbf {y}\Vert _{2}\cos \,\theta =\left| \mathbf {x}^{\mathrm {T}}\mathbf {y}\right| \). Then,

$$\begin{aligned} \frac{\left| \mathbf {x}^{\mathrm {T}}\mathbf {Ay}\right| }{\left( \mathbf {x}^{\mathrm {T}}\mathbf {Ax}\right) ^{1/2}\cdot \left( \mathbf {y}^{\mathrm {T}}\mathbf {Ay}\right) ^{1/2}} \le \frac{\chi +\cos \,\theta }{1+\chi \cos \,\theta } =\frac{\chi +\left| \mathbf {x}^{\mathrm {T}}\mathbf {y}\right| / \left( \Vert \mathbf {x}\Vert _{2}\cdot \Vert \mathbf {y}\Vert _{2}\right) }{1+\chi \cdot \left| \mathbf {x}^{\mathrm {T}}\mathbf {y}\right| /\left( \Vert \mathbf {x}\Vert _{2} \cdot \Vert \mathbf {y}\Vert _{2}\right) }, \end{aligned}$$

where \(\chi \doteq (\lambda _{1}/\lambda _{m}-1)/(\lambda _{1}/\lambda _{m}+1)\).
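Theorem 6 can be sanity-checked numerically. Below is a small sketch (our own illustration, not from [11]), with an arbitrarily drawn symmetric positive definite \(\mathbf {A}\) and random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
B = rng.standard_normal((m, m))
A = B @ B.T + m * np.eye(m)          # symmetric positive definite

x = rng.standard_normal(m)
y = rng.standard_normal(m)

lam = np.linalg.eigvalsh(A)          # ascending: lam[0]=lambda_m, lam[-1]=lambda_1
chi = (lam[-1] / lam[0] - 1) / (lam[-1] / lam[0] + 1)
cos_theta = abs(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

lhs = abs(x @ A @ y) / np.sqrt((x @ A @ x) * (y @ A @ y))
rhs = (chi + cos_theta) / (1 + chi * cos_theta)
print(lhs <= rhs)                    # True for every draw
```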

Theorem 7

(Concentration of the singular values of a Gaussian random matrix [7]) Suppose that \(\mathbf {\Phi }\in {\mathbb {R}}^{m\times n}\), \(m<n\), is a Gaussian random matrix whose entries satisfy \(\varphi _{ij}\buildrel {\textit{i.i.d.}}\over {\sim }N(0,m^{-1})\). Then the singular values of \(\mathbf {\Phi }\) satisfy

$$\begin{aligned}&\Pr \,\left\{ \sqrt{n/m}(1-\epsilon )-1\le \sigma _{i}\le 1 +\sqrt{n/m}(1+\epsilon ),i\in [m]\right\} \ge 1-2\exp \,(-n\epsilon ^{2}/2),\\&\quad \forall \,\epsilon >0. \end{aligned}$$
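An empirical check of this concentration (an illustrative sketch; the dimensions and \(\epsilon \) below are arbitrary choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, eps = 100, 1000, 0.1
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # phi_ij ~ N(0, 1/m)

s = np.linalg.svd(Phi, compute_uv=False)               # the m singular values
lower = np.sqrt(n / m) * (1 - eps) - 1
upper = 1 + np.sqrt(n / m) * (1 + eps)
print(lower <= s.min(), s.max() <= upper)              # both True w.h.p.
```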

Theorem 8

(Appendix 1 in [5]) Suppose that \(\mathbf {\Phi }\) is an \(m\times n\) Gaussian measurement matrix. Then, for any \(0<\epsilon <1\) and any constant \(a\in (0,1)\),

$$\begin{aligned} \Pr \,\{\mu \,(\mathbf {\Phi })\ge \epsilon \} \le n(n-1) \left[ \exp \,\left( -\frac{ma^{2}\epsilon ^{2}}{4(1+a\epsilon /2)} \right) +\exp \,\left( -\frac{m}{4}(1-a)^{2}\right) \right] . \end{aligned}$$
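The bound of Theorem 8 is straightforward to evaluate numerically; the sketch below tabulates it for a few values of \(m\) (the parameter choices are illustrative only, not from the paper). The \(n(n-1)\) union-bound factor makes the bound informative only once \(m\) is sufficiently large relative to \(\log n\):

```python
import numpy as np

def coherence_tail_bound(m, n, eps, a):
    # Theorem 8 upper bound on Pr{ mu(Phi) >= eps }
    t1 = np.exp(-m * a**2 * eps**2 / (4 * (1 + a * eps / 2)))
    t2 = np.exp(-(m / 4) * (1 - a)**2)
    return n * (n - 1) * (t1 + t2)

for m in (500, 2000, 8000):
    print(m, coherence_tail_bound(m, n=1024, eps=0.5, a=0.9))
# the bound is vacuous (>1) for small m and decays rapidly thereafter
```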

Theorem 9

(Appendix D in [19]) Let Z be a central \(\chi ^{2}\)-variable with m degrees of freedom. Then, for all \(t>0\),

$$\begin{aligned} \Pr \, \{ Z-m\le -2\sqrt{mt} \} \le \exp \,(-t), \quad \text {and}\quad \Pr \,\{Z-m\ge 2\sqrt{mt}+2t\} \le \exp \,(-t). \end{aligned}$$
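A Monte Carlo check of these tails (a sketch with arbitrary m and t; footnote 11 applies these same bounds to \(\Vert \mathbf {e}\Vert _{2}^{2}/\sigma _{e}^{2}\) with \(t=m/9\)):

```python
import numpy as np

rng = np.random.default_rng(3)
m, t, trials = 50, 5.0, 200_000
Z = rng.chisquare(m, size=trials)     # central chi^2 with m degrees of freedom

lo = np.mean(Z - m <= -2 * np.sqrt(m * t))
hi = np.mean(Z - m >= 2 * np.sqrt(m * t) + 2 * t)
print(lo, hi, np.exp(-t))             # both empirical tails fall below exp(-t)
```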


Cite this article

Chen, Y., Peng, J. & Yue, S. Preconditioning for Orthogonal Matching Pursuit with Noisy and Random Measurements: The Gaussian Case. Circuits Syst Signal Process 37, 4109–4127 (2018). https://doi.org/10.1007/s00034-017-0730-3
