Abstract
The success of orthogonal matching pursuit (OMP) in sparse signal recovery depends heavily on its ability to recover the correct support. Based on a support recovery guarantee for OMP expressed in terms of the mutual coherence, together with a result on the concentration of the extreme singular values of a Gaussian random matrix, this paper proposes a preconditioning method that increases the recovery rate of OMP from random, noisy measurements. Compared with several existing preconditioning methods, the proposed one reduces the mutual coherence with provably high probability. At the same time, it incurs only a slight signal-to-noise ratio reduction, again with high probability, which is empirically shown to be less severe than that caused by a recently suggested technique for the noisy case. Simulations demonstrate the advantages of the proposed preconditioning over other currently relevant ones in terms of both the performance improvement for OMP and the computation time.
Notes
We call the ratio \(x_{\min }/\Vert \mathbf {e}\Vert _{2}\) the signal-to-noise ratio (SNR) for OMP. According to Theorem 1, the higher the SNR for OMP, the better OMP performs.
See the footnote for the fact ➀ in the proof of Theorem 5.
Note that for a given \(m\times n\) matrix \(\mathbf {A}\) all of whose columns are nonzero, \(\mu \,(\mathbf {A})\) equals \(\mu \,(\mathbf {AD})\), where \(\mathbf {D}\) is a diagonal matrix whose jth diagonal entry is \(\Vert \mathbf {\alpha }_{j}\Vert _{2}^{-1}\), because by the definition of the mutual coherence, \(\mu \,(\mathbf {A})=\max _{1\le j<k\le n}\,\left| \langle \mathbf {\alpha }_{j},\mathbf {\alpha }_{k}\rangle \right| /(\Vert \mathbf {\alpha }_{j}\Vert _{2}\cdot \Vert \mathbf {\alpha }_{k}\Vert _{2})\) and \(\mu \,(\mathbf {AD})\doteq \max _{1\le j<k\le n}\,\left| \langle \mathbf {\alpha }_{j}/\Vert \mathbf {\alpha }_{j}\Vert _{2},\mathbf {\alpha }_{k}/\Vert \mathbf {\alpha }_{k}\Vert _{2}\rangle \right| \).
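This invariance can be checked numerically. The following sketch (our own illustration, not code from the paper; the helper `mutual_coherence` is a hypothetical name) computes the mutual coherence of a random matrix before and after column normalization:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns of A."""
    G = A.T @ A                                # Gram matrix of the columns
    norms = np.linalg.norm(A, axis=0)          # column 2-norms (assumed nonzero)
    C = np.abs(G) / np.outer(norms, norms)     # normalized |<a_j, a_k>|
    np.fill_diagonal(C, 0.0)                   # exclude the diagonal (j = k)
    return C.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
D = np.diag(1.0 / np.linalg.norm(A, axis=0))   # D rescales each column to unit norm
assert np.isclose(mutual_coherence(A), mutual_coherence(A @ D))
```

Since the definition already normalizes each pair of columns, rescaling the columns by any positive factors leaves the value unchanged.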
Here, all the columns of the matrix \(\mathbf {\Phi }\in E_{4}^{\mathrm {c}}\) are nonzero, and the preconditioner \(\mathbf {P}\), whose existence is guaranteed by the fact that \(\mathbf {\Phi }\in E_{\epsilon }\), is invertible; hence \(\mathbf {P\varphi }_{j}\ne \mathbf {0}\) for all \(1\le j\le n\).
This is why we do not use another common type of test sparse signal, the Gaussian sparse signal [6], whose nonzero entries are drawn independently from the distribution N(0, 1): when sorted in descending order, the amplitudes of the components of such a signal decay quickly with the sorted index, so that \(x_{\min }\approx 0\).
A slightly smaller variance suffices to produce a strong-noise environment in the zero-one case, because of its smaller minimum nonzero magnitude \(x_{\min }=1\).
It was shown in [20] that chaotic measurement matrices perform as well as Gaussian measurement matrices, while being easier to implement physically and requiring only a single initial state to be stored.
References
Z. Ben-Haim, Y.C. Eldar, M. Elad, Coherence-based performance guarantees for estimating a sparse vector under random noise. IEEE Trans. Signal Process. 58, 5030–5043 (2010)
M. Bucolo, R. Caponetto, L. Fortuna, M. Frasca, A. Rizzo, Does chaos work better than noise? IEEE Circuits Syst. Mag. 2, 4–19 (2002)
T.T. Cai, L. Wang, Orthogonal matching pursuit for sparse signal recovery with noise. IEEE Trans. Inf. Theory 57, 4680–4688 (2011)
E.J. Candes, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005)
Y.T. Chen, J.G. Peng, Influences of preconditioning on the mutual coherence and the restricted isometry property of Gaussian/Bernoulli measurement matrices. Linear Multilinear Algebra 64, 1750–1759 (2016)
W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55, 2230–2249 (2009)
K.R. Davidson, S.J. Szarek, Local operator theory, random matrices and Banach spaces, in Handbook of the Geometry of Banach Spaces, ed. by W.B. Johnson, J. Lindenstrauss (Elsevier, Amsterdam, 2001), pp. 317–366
D.L. Donoho, M. Elad, V.N. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 52, 6–18 (2006)
M. Elad, Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer, New York, 2010), pp. 25, 98
S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, New York, 2013), pp. 65–69, 69–70
M. Lin, G. Sinnamon, The generalized Wielandt inequality in inner product spaces. Eurasian Math. J. 3, 72–85 (2012)
K. Schnass, P. Vandergheynst, Average performance analysis for thresholding. IEEE Signal Process. Lett. 14, 828–831 (2007)
K. Schnass, P. Vandergheynst, Dictionary preconditioning for greedy algorithms. IEEE Trans. Signal Process. 56, 1994–2002 (2008)
A.M. Tillmann, M.E. Pfetsch, The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. IEEE Trans. Inf. Theory 60, 1248–1259 (2014)
J.A. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50, 2231–2242 (2004)
J.A. Tropp, On the conditioning of random subdictionaries. Appl. Comput. Harmon. Anal. 25, 1–24 (2008)
J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53, 4655–4666 (2007)
E. Tsiligianni, L.P. Kondi, A.K. Katsaggelos, Preconditioning for underdetermined linear systems with sparse solutions. IEEE Signal Process. Lett. 22, 1239–1243 (2015)
M.J. Wainwright, Information theoretic limits on sparsity recovery in the high dimensional and noisy setting. IEEE Trans. Inf. Theory 55, 5728–5741 (2009)
L. Yu, J.P. Barbot, G. Zheng, H. Sun, Compressive sensing with chaotic sequence. IEEE Signal Process. Lett. 17, 731–734 (2010)
J. Zhao, X. Bai, S.H. Bi, R. Tao, Coherence-based analysis of modified orthogonal matching pursuit using sensing dictionary. IET Signal Process. 9, 218–225 (2015)
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Contracts 11131006, 41390450 and 91330204, in part by EU FP7-IRSES Project LIVCODE under Grant 295151, and in part by the National Basic Research Program of China under Contract 2013CB329404. We would like to thank Evaggelia Tsiligianni for providing the code implementing the preconditioning in her own work and for patiently discussing its details. Special thanks are due to Karin Schnass for instructive discussions on her work. We would also like to thank the referees for numerous suggestions that helped to clarify the exposition and argumentation.
Appendix: Some Necessary Results for the Proofs
For the derivations in Sect. 3, we need several results, summarized in the theorems below.
Theorem 6
(Theorem 3.4 in [11]) Let \(\mathbf {A}\in {\mathbb {R}}^{m\times m}\) be a symmetric positive definite matrix with eigenvalues \(\lambda _{1}\ge \cdots \ge \lambda _{m}>0\). For any two nonzero linearly independent vectors \(\mathbf {x},\mathbf {y}\in {\mathbb {R}}^{m}\), let \(\theta \in (0,\pi /2]\) be the angle between the lines corresponding to \(\mathbf {x}\) and \(\mathbf {y}\), so that \(\Vert \mathbf {x}\Vert _{2}\Vert \mathbf {y}\Vert _{2}\cos \,\theta =\left| \mathbf {x}^{\mathrm {T}}\mathbf {y}\right| \). Then,
$$\frac{\left| \mathbf {x}^{\mathrm {T}}\mathbf {A}\mathbf {y}\right| }{\sqrt{\left( \mathbf {x}^{\mathrm {T}}\mathbf {A}\mathbf {x}\right) \left( \mathbf {y}^{\mathrm {T}}\mathbf {A}\mathbf {y}\right) }}\le \frac{\cos \,\theta +\chi }{1+\chi \cos \,\theta },$$
where \(\chi \doteq (\lambda _{1}/\lambda _{m}-1)/(\lambda _{1}/\lambda _{m}+1)\).
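The generalized Wielandt inequality can be spot-checked numerically. The following sketch (our own illustration, under the assumption that \(\theta\) is the angle between the lines spanned by the two vectors) draws a random symmetric positive definite matrix and a random vector pair, and verifies the bound:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 6
B = rng.standard_normal((m, m))
A = B @ B.T + 0.1 * np.eye(m)                  # symmetric positive definite matrix

lam = np.linalg.eigvalsh(A)                    # eigenvalues in ascending order
kappa = lam[-1] / lam[0]                       # condition number lambda_1 / lambda_m
chi = (kappa - 1) / (kappa + 1)

x, y = rng.standard_normal(m), rng.standard_normal(m)
cos_t = abs(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))  # cosine of line angle

lhs = abs(x @ A @ y) / np.sqrt((x @ A @ x) * (y @ A @ y))
rhs = (cos_t + chi) / (1 + chi * cos_t)
assert lhs <= rhs + 1e-12                      # the generalized Wielandt bound holds
```

Note that for \(\mathbf {A}=\mathbf {I}\) one has \(\chi =0\) and the bound reduces to \(\cos \,\theta \) (Cauchy–Schwarz with equality in the well-conditioned limit), while for orthogonal \(\mathbf {x},\mathbf {y}\) it reduces to the classical Wielandt inequality with constant \(\chi \).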
Theorem 7
(The concentration of the singular values of a Gaussian random matrix in [7]) Suppose that \(\mathbf {\Phi }\in {\mathbb {R}}^{m\times n}\), \(m<n\), is a Gaussian random matrix whose entries \(\varphi _{ij}\buildrel {\textit{i.i.d.}}\over {\sim }N(0,m^{-1})\). Then, for every \(t>0\), the singular values of the matrix \(\mathbf {\Phi }\) satisfy
$${\mathbb {P}}\left( \sqrt{n/m}-1-t\le s_{\min }(\mathbf {\Phi })\le s_{\max }(\mathbf {\Phi })\le \sqrt{n/m}+1+t\right) \ge 1-2\,\mathrm {e}^{-mt^{2}/2}.$$
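The concentration phenomenon of Theorem 7 is easy to observe empirically. The following sketch (our own illustration, with arbitrarily chosen sizes \(m=100\), \(n=400\) and slack \(t=0.5\)) draws a Gaussian matrix with i.i.d. \(N(0,m^{-1})\) entries and checks that its extreme singular values fall near \(\sqrt{n/m}\mp 1\):

```python
import numpy as np

m, n, t = 100, 400, 0.5
rng = np.random.default_rng(1)
# i.i.d. N(0, 1/m) entries, so the columns have unit expected squared norm
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

s = np.linalg.svd(Phi, compute_uv=False)       # singular values, descending order
lo, hi = np.sqrt(n / m) - 1 - t, np.sqrt(n / m) + 1 + t
assert lo <= s[-1] <= s[0] <= hi               # extreme singular values concentrate
```

For these sizes the failure probability \(2\mathrm {e}^{-mt^{2}/2}\) is astronomically small, so the check passes for essentially every draw.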
Theorem 8
(Appendix 1 in [5]) Suppose that \(\mathbf {\Phi }\) is an \(m\times n\) Gaussian measurement matrix. Then, for any \(0<\epsilon <1\) and any constant \(a\in (0,1)\),
Theorem 9
(Appendix D in [19]) Let Z be a central \(\chi ^{2}\)-variable with m degrees of freedom. Then, for all \(t>0\),
Cite this article
Chen, Y., Peng, J. & Yue, S. Preconditioning for Orthogonal Matching Pursuit with Noisy and Random Measurements: The Gaussian Case. Circuits Syst Signal Process 37, 4109–4127 (2018). https://doi.org/10.1007/s00034-017-0730-3