Sparse support recovery using correlation information in the presence of additive noise

Abstract

A correlation-based framework has recently been proposed for sparse support recovery in the noiseless case. To solve the resulting problem, the constrained least absolute shrinkage and selection operator (LASSO) was employed, and its regularization parameter was found to be key to the recovery. This paper discusses sparse support recoverability via this framework and the adjustment of the regularization parameter in the noisy case. The main contribution is to provide noise-related conditions that guarantee sparse support recovery. It is pointed out that candidates for the regularization parameter taken from a noise-related region achieve the optimization, and that the effect of the noise cannot be ignored. When the number of samples is finite, sparse support recoverability is further analyzed by estimating the recovery probability for a fixed regularization parameter in this region. Asymptotic consistency is obtained in the probabilistic sense as the number of samples tends to infinity. Simulations demonstrate the validity of our results.

References

  • Ariananda, D., & Leus, G. (2012). Compressive wideband power spectrum estimation. IEEE Transactions on Signal Processing, 60, 4775–4789.

  • Baraniuk, R. (2007). Compressive sensing [lecture notes]. IEEE Signal Processing Magazine, 24, 118–121.

  • Ben-Haim, Z., Eldar, Y., & Elad, M. (2010). Coherence-based performance guarantees for estimating a sparse vector under random noise. IEEE Transactions on Signal Processing, 58, 5030–5043.

  • Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge: Cambridge University Press.

  • Candes, E., & Wakin, M. (2008). An introduction to compressive sampling. IEEE Signal Processing Magazine, 25, 21–30.

  • Chen, J., & Huo, X. (2006). Theoretical results on sparse representations of multiple-measurement vectors. IEEE Transactions on Signal Processing, 54, 4634–4643.

  • Cotter, S., Rao, B., Engan, K., & Kreutz-Delgado, K. (2005). Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, 53, 2477–2488.

  • Davies, M., & Eldar, Y. (2012). Rank awareness in joint sparse recovery. IEEE Transactions on Information Theory, 58, 1135–1146.

  • Donoho, D., & Tanner, J. (2005). Sparse nonnegative solution of underdetermined linear equations by linear programming. Proceedings of the National Academy of Sciences, 102, 9446–9451.

  • Donoho, D., Elad, M., & Temlyakov, V. (2006). Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory, 52, 6–18.

  • Eldar, Y., & Mishali, M. (2009). Robust recovery of signals from a structured union of subspaces. IEEE Transactions on Information Theory, 55, 5302–5316.

  • Fuchs, J. (2005). Recovery of exact sparse representations in the presence of bounded noise. IEEE Transactions on Information Theory, 51, 3601–3608.

  • Grant, M., & Boyd, S. (2014). CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx.

  • Haupt, J., Bajwa, W., Raz, G., & Nowak, R. (2010). Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Transactions on Information Theory, 56, 5862–5875.

  • Jin, Y., & Rao, B. (2013). Support recovery of sparse signals in the presence of multiple measurement vectors. IEEE Transactions on Information Theory, 59, 3139–3157.

  • Kim, J. M., Lee, O. K., & Ye, J. C. (2012). Compressive music: Revisiting the link between compressive sensing and array signal processing. IEEE Transactions on Information Theory, 58, 278–301.

  • Lustig, M., Donoho, D., Santos, J., & Pauly, J. (2008). Compressed sensing MRI. IEEE Signal Processing Magazine, 25, 72–82.

  • Pal, P., & Vaidyanathan, P. (2010). Nested arrays: A novel approach to array processing with enhanced degrees of freedom. IEEE Transactions on Signal Processing, 58, 4167–4181.

  • Pal, P., & Vaidyanathan, P. (2015). Pushing the limits of sparse support recovery using correlation information. IEEE Transactions on Signal Processing, 63, 711–726.

  • Slawski, M., & Hein, M. (2012). Non-negative least squares for high-dimensional linear models: Consistency and sparse recovery without regularization. arXiv:1205.0953.

  • Tan, Z., Eldar, Y., & Nehorai, A. (2014). Direction of arrival estimation using co-prime arrays: A super resolution viewpoint. IEEE Transactions on Signal Processing, 62, 5565–5576.

  • Tang, G., & Nehorai, A. (2010). Performance analysis for sparse support recovery. IEEE Transactions on Information Theory, 56, 1383–1399.

  • Tibshirani, R. (2011). Regression shrinkage and selection via the lasso: A retrospective. Journal of the Royal Statistical Society Series B Statistical Methodology, 73, 273–282.

  • Tulino, A., Caire, G., Verdu, S., & Shamai, S. (2013). Support recovery with sparsely sampled free random matrices. IEEE Transactions on Information Theory, 59, 4243–4271.

  • Vaidyanathan, P., & Pal, P. (2011). Sparse sensing with co-prime samplers and arrays. IEEE Transactions on Signal Processing, 59, 573–586.

  • Wipf, D., & Rao, B. (2007). An empirical bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Transactions on Signal Processing, 55, 3704–3716.

  • Zhao, P., & Yu, B. (2006). On model selection consistency of Lasso. Journal of Machine Learning Research, 7, 2541–2563.

  • Zheng, J., & Kaveh, M. (2013). Sparse spatial spectral estimation: A covariance fitting algorithm, performance and regularization. IEEE Transactions on Signal Processing, 61, 2767–2777.

Acknowledgments

This work was supported by the NSFC (Grant No. 61471174) and the Guangzhou Science Research Project (Grant No. 2014J4100247).

Author information

Corresponding author

Correspondence to Yuli Fu.

Appendices

Appendix 1: Proof of Theorem 1

The following lemma will be used in our proof.

Lemma 1

(Pal and Vaidyanathan 2015, Lemma 4) The mutual coherence of a matrix \(\mathbf {A}\) and the mutual coherence of its self Khatri-Rao product matrix \(\mathbf {B}\) are related as follows:

$$\begin{aligned} \mu _{\mathbf {B}} = \mu _{\mathbf {A}}^2. \end{aligned}$$
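As a quick numerical sanity check of Lemma 1 (an illustration, not part of the original proof), the Python sketch below draws a random dictionary \(\mathbf {A}\) with unit-norm columns, forms \(\mathbf {B}\) whose columns are \(\mathbf {a}_i \otimes \mathbf {a}_i\) as in the proof of Theorem 1 below, and compares \(\mu _{\mathbf {B}}\) with \(\mu _{\mathbf {A}}^2\); all sizes are illustrative.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-normalized columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
M, N = 8, 20
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0, keepdims=True)      # unit-norm atoms a_i

# Self Khatri-Rao product: the i-th column of B is a_i (Kronecker) a_i
B = np.column_stack([np.kron(A[:, i], A[:, i]) for i in range(N)])

print(mutual_coherence(B), mutual_coherence(A) ** 2)   # the two printed values agree
```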

Proof (Proof of Theorem 1)

By convex optimization theory (Boyd and Vandenberghe 2004), the Lagrangian function of (8) is

$$\begin{aligned} L[\mathbf {r}(\tau ),\mathbf {\lambda }(\tau )] = \frac{1}{2}\Vert \mathbf {z} - \mathbf {Br}(\tau )\Vert _{2}^2 + \tau {\mathbf {1}_{N}^T}\mathbf {r}(\tau ) - \mathbf {\lambda }(\tau )^T\mathbf {r}(\tau ), \end{aligned}$$

where \(\mathbf {\lambda }(\tau )\in {\mathbb {R}^{N }}\) is the Lagrangian multiplier associated with the parameter \(\tau \). The Karush-Kuhn-Tucker (KKT) conditions are given by

$$\begin{aligned} \mathbf {b}_i^T[\mathbf {Br}(\tau ) - \mathbf {z}] + \tau - {\lambda _i}(\tau ) = 0, \quad {\lambda _i}(\tau ) \ge 0, \quad {r_i}(\tau ) \ge 0, \quad {\lambda _i}(\tau ){r_i}(\tau ) = 0, \end{aligned}$$

where \(i=1,\cdots ,N\). From the KKT conditions, it is easy to verify that the conditions \({\mathbf {r}_{{{{\varLambda }'} ^c}}}(\tau ) = \mathbf {0}\) and \({\mathbf {r}_{{\varLambda }'} }(\tau )\succ \mathbf {0}\) are sufficient for \(\mathrm {supp}[\mathbf {r}(\tau )] = {\varLambda }'\) to hold. If \({\varLambda }' \subseteq {{\varLambda }}\), partition the true support as \({{\varLambda }} = \{{\varLambda }' ,{{\varLambda }}\backslash {\varLambda }' \}\) and the nonzero part of the true sparse signal as \({\hat{\mathbf {r}}_{{{\varLambda }}}} = {[{{\hat{\mathbf {r}}^T_{{\varLambda }'} }},{{\hat{\mathbf {r}}^T_{{{\varLambda }}\backslash {{\varLambda }'} }}}]^T}\). Then (13) can be reformulated as

$$\begin{aligned} \mathbf {z} = \mathbf {B}_{{\varLambda }'} \hat{\mathbf {r}}_{{\varLambda }'} + \mathbf {h}, \end{aligned}$$
(16)

where \(\mathbf {h}={\mathbf {B}_{{{\varLambda }}\backslash {{\varLambda }'} }}{\hat{\mathbf {r}}_{{{\varLambda }}\backslash {{\varLambda }'} } } + \mathrm {vec}(\sigma _\varepsilon ^2{\mathbf {I}_M} + \mathbf {H} )\). From the KKT conditions, the optimal solution must satisfy

$$\begin{aligned} \mathbf {B}_{{\varLambda }'} ^T{\mathbf {B}_{{\varLambda }'} }{\mathbf {r}_{{\varLambda }'} }(\tau ) - \mathbf {B}_{{\varLambda }'} ^T\mathbf {z} + \tau {\mathbf {1}_{\left| {{\varLambda }'} \right| }} = \mathbf {0}, \end{aligned}$$
(17)

and

$$\begin{aligned} \mathbf {b}_j^T\mathbf {B}_{{\varLambda }'}\mathbf {r}_{{\varLambda }'}(\tau ) - \mathbf {b}_j^T\mathbf {z} + \tau - {{\lambda } _j}(\tau ) = 0 \quad \forall j \in {{{\varLambda }'}^c}. \end{aligned}$$
(18)

If \(| {{\varLambda }' }| < \frac{1}{2}+\frac{1}{2\mu _{\mathbf {B}}}\), the pseudo-inverse \(\mathbf {B}_{{\varLambda }' } ^ + = {(\mathbf {B}_{{\varLambda }' }^T{\mathbf {B}_{{\varLambda }' }})^{-1}}\mathbf {B}_{{\varLambda }' }^T\) exists (Fuchs 2005). From (17), it follows that

$$\begin{aligned} {\mathbf {r}_{{\varLambda }'}}\left( \tau \right) = \mathbf {B}_{{\varLambda }'} ^ + \mathbf {z} - \tau {\left( {\mathbf {B}_{{\varLambda }'} ^T{\mathbf {B}_{{\varLambda }'} }} \right) ^{ - 1}}{\mathbf {1}_{\left| {{\varLambda }'} \right| }}. \end{aligned}$$
(19)

Substituting (16) into (19), we have

$$\begin{aligned} {\mathbf {r}_{{\varLambda }'} }\left( \tau \right) = {\hat{\mathbf {r}}_{{\varLambda }'} } + \mathbf {B}_{{\varLambda }'} ^ + \mathbf {h} - \tau {\left( {\mathbf {B}_{{\varLambda }'} ^T{\mathbf {B}_{{\varLambda }'} }} \right) ^{ - 1}}{\mathbf {1}_{\left| {{\varLambda }'} \right| }}. \end{aligned}$$
(20)

Substituting (16) into (18), we obtain

$$\begin{aligned} \begin{aligned} {{\lambda }_j}(\tau )= \tau - \mathbf {b}_j^T({\mathbf {I}_{M^2}} - {\mathbf {B}_{{\varLambda }'} }\mathbf {B}_{{\varLambda }'} ^ + )\mathbf {h} -\tau \mathbf {b}_j^T{\mathbf {B}_{{\varLambda }'} }{\left( {\mathbf {B}_{{\varLambda }'} ^T{\mathbf {B}_{{\varLambda }'} }} \right) ^{ - 1}}{\mathbf {1}_{\left| {{\varLambda }'} \right| }} \quad \forall j \in {{{\varLambda }'} ^c}. \end{aligned} \end{aligned}$$
(21)

Denote that

$$\begin{aligned} {\mathbf {h}^ \bot } = (\mathbf {I}_{M^2} - {\mathbf {B}_{{\varLambda }'} }\mathbf {B}_{{\varLambda }'}^ + ) \mathbf {h}. \end{aligned}$$

Then, (21) can be rewritten as

$$\begin{aligned} {{\lambda } _j}(\tau ) = \tau - \mathbf {b}_j^T{\mathbf {h}^ \bot } - \tau \mathbf {b}_j^T{(\mathbf {B}_{{\varLambda }'} ^ + )^T}{\mathbf {1}_{\left| {{\varLambda }'} \right| }} \quad \forall j \in {{{\varLambda }'} ^c}. \end{aligned}$$
(22)

Now, from (20) and (22), the conditions

$$\begin{aligned} \min \left( \hat{{r}}_{{\varLambda }'}\right) > {\left\| {\mathbf {B}_{{\varLambda }'}^{+} \mathbf {h} - \tau {{\left( {\mathbf {B}_{{\varLambda }'} ^T{\mathbf {B}_{{\varLambda }'} }} \right) }^{ - 1}}{\mathbf {1}_{\left| {{\varLambda }'} \right| }}} \right\| _\infty }, \end{aligned}$$

and

$$\begin{aligned} \tau > \left| {{\mathbf {b}^{T}_j}{{\mathbf {h}}^ \bot } + \tau \mathbf {b}_j^T{{(\mathbf {B}_{{\varLambda }'}^ {+} )}^T}{\mathbf {1}_{\left| {{\varLambda }'} \right| }}} \right| , \end{aligned}$$

imply that \({\mathbf {r}_{{\varLambda }'}}(\tau ) \succ \mathbf {0}\;\) and \({\mathbf {r}_{{{{\varLambda }'} ^c}}}(\tau ) = \mathbf {0}\;\), respectively. Using techniques similar to those in the proof of Theorem 4 in Fuchs (2005), together with Lemma 1, we obtain the following conditions

$$\begin{aligned} \tau > \frac{{1\mathrm{{ + }}\mu _{\mathbf {A}}^2 - \mu _{\mathbf {A}}^2\left| {{\varLambda }'} \right| }}{{1\mathrm{{ + }}\mu _{\mathbf {A}}^2 - 2\mu _{\mathbf {A}}^2\left| {{\varLambda }'} \right| }} {\left\| {\mathbf {h}} \right\| _2}=\frac{\eta '}{\xi '}{\left\| {\mathbf {h}} \right\| _2}, \end{aligned}$$

and

$$\begin{aligned} \hat{r}_{\min }> \frac{{{{\left\| {\mathbf {h}} \right\| }_2}\mathrm{{ + }}\tau }}{{1\mathrm{{ + }}\mu _{\mathbf {A}}^2 - \mu _{\mathbf {A}}^2\left| {{\varLambda }'} \right| }}=\frac{{{{\left\| {\mathbf {h}} \right\| }_2}\mathrm{{ + }}\tau }}{\eta '}, \end{aligned}$$

which guarantee that \(\mathrm {supp}[\mathbf {r}(\tau )] \subseteq {{\varLambda }}\). This completes the first part of the proof.

Next, to prove the second part of the theorem, it suffices to verify that (15) guarantees \(\mathrm {supp}[\mathbf {r}(\tau )] = {\varLambda }\) if the regularization parameter

$$\begin{aligned} \tau = M\sigma _\varepsilon ^2 + \upsilon \end{aligned}$$
(23)

is used. From (20) and (22), one directly has

$$\begin{aligned} {\mathbf {r}_{{{\varLambda }}}}\left( \tau \right) = {\hat{\mathbf {r}}_{{{\varLambda }}}} + \mathbf {B}_{{{\varLambda }}}^ + \mathbf {h} - \tau {\left( {\mathbf {B}_{{{\varLambda }}}^T{\mathbf {B}_{{{\varLambda }}}}} \right) ^{ - 1}}{\mathbf {1}_{\left| {{{\varLambda }}} \right| }}, \end{aligned}$$
(24)
$$\begin{aligned} {{\lambda }_j}(\tau ) = \tau - {\mathbf {b}^{T}_j}{\mathbf {h}^ \bot } - \tau \mathbf {b}_j^T(\mathbf {B}_{\varLambda }^ +)^T {\mathbf {1}_{\left| {{{\varLambda }}} \right| }} \;\; \forall j \in {\varLambda }^c, \end{aligned}$$
(25)

where \(\mathbf {h} = \mathrm {vec}(\sigma _\varepsilon ^2{\mathbf {I}_M} + \mathbf {H})\), since \({{\varLambda }}\backslash {{\varLambda }} = \emptyset \). Next, observe that if an atom \(\mathbf {a}\) of \(\mathbf {A}\) is normalized, i.e.

$$\begin{aligned} \left\| {{\mathbf {a}}} \right\| _{2}^2 = \sum \limits _{j = 1}^M {{a}_{j}^2} = 1, \end{aligned}$$

then the corresponding atom \(\mathbf {b} = \mathbf {a} \otimes \mathbf {a}\) of \(\mathbf {B}\) is also normalized, since

$$\begin{aligned} \Vert \mathbf {b}\Vert _2^2 = \left\| {{\mathbf {a}} \otimes {\mathbf {a}}} \right\| _2^2 = {\left( \sum \limits _{j = 1}^M {a_{j}^2} \right) ^2} = 1. \end{aligned}$$

Using this relationship, we have

$$\begin{aligned} \begin{aligned} \mathbf {b}_j^T\mathrm {vec}({\sigma _{\varepsilon } ^2}{\mathbf {I}_M}) = {\left[ {{\mathbf {a}_j} \otimes {\mathbf {a}_j}} \right] ^T}\mathrm {vec}({\sigma _{\varepsilon } ^2}{\mathbf {I}_M}) = \mathbf {a}_j^T(\sigma _\varepsilon ^2{\mathbf {I}_M}){\mathbf {a}_j} = M\sigma _\varepsilon ^2. \end{aligned} \end{aligned}$$
(26)

Substituting (26) into (24) and (25), one has

$$\begin{aligned} \begin{aligned} {\mathbf {r}_{{{\varLambda }}}}\left( \tau \right) = {\hat{\mathbf {r}}_{{{\varLambda }}}}&+ M\sigma _\varepsilon ^2{(\mathbf {B}_{\varLambda }^T{\mathbf {B}_{\varLambda }})^{ - 1}}{\mathbf {1}_{\left| {{{\varLambda }}} \right| }}+\mathbf {B}_{{{\varLambda }}}^ + \mathrm {vec}(\mathbf {H}) - \tau {\left( {\mathbf {B}_{{{\varLambda }}}^T{\mathbf {B}_{{{\varLambda }}}}} \right) ^{ - 1}}{\mathbf {1}_{\left| {{{\varLambda }}} \right| }}, \end{aligned} \end{aligned}$$
(27)

and

$$\begin{aligned} \begin{aligned} {{\lambda }_j}(\tau ) =\tau - M\sigma _\varepsilon ^2 + (M\sigma _\varepsilon ^2 - \tau )\mathbf {b}_j^T{(\mathbf {B}_{{\varLambda }}^ + )^T}{\mathbf {1}_{\left| {{{\varLambda }}} \right| }} -\mathbf {b}_j^T\mathrm {vec}{(\mathbf {H})^ \bot } \quad \forall j \in {\varLambda }^c. \end{aligned} \end{aligned}$$
(28)

Substituting (23) into the right-hand sides of (27) and (28), one obtains

$$\begin{aligned} \begin{aligned}&\mathbf {r}_{{{\varLambda }}}(\tau ) = \hat{\mathbf {r}}_{{{\varLambda }}} + \mathbf {B}_{{{\varLambda }}}^ + \mathrm {vec}(\mathbf {H}) - \upsilon {\left( {\mathbf {B}_{{{\varLambda }}}^T{\mathbf {B}_{{{\varLambda }}}}} \right) ^{ - 1}}{\mathbf {1}_{\left| {{{\varLambda }}} \right| }}, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} {{\lambda }_j}(\tau ) = \upsilon - \upsilon \mathbf {b}_j^T{(\mathbf {B}_{{\varLambda }}^ + )^T}{\mathbf {1}_{\left| {{{\varLambda }}} \right| }} - \mathbf {b}_j^T\mathrm {vec}{(\mathbf {H})^ \bot } \quad \forall j \in {\varLambda }^c. \end{aligned}$$
Therefore, the conditions

$$\begin{aligned} \upsilon > \frac{{1\mathrm{{ + }}\mu _{\mathbf {A}}^2 - \mu _{\mathbf {A}}^2\left| {{{\varLambda }}} \right| }}{{1\mathrm{{ + }}\mu _{\mathbf {A}}^2 - 2\mu _{\mathbf {A}}^2\left| {{{\varLambda }}} \right| }}\;{\left\| {\mathrm {vec}(\mathbf {H})} \right\| _2}=\frac{\eta }{\xi }{\left\| {\mathrm {vec}(\mathbf {H})} \right\| _2}, \end{aligned}$$

and

$$\begin{aligned} \hat{r}_{\min }> \frac{{{{\left\| {\mathrm {vec}(\mathbf {H})} \right\| }_2}\mathrm{{ + }}\upsilon }}{{1\mathrm{{ + }}\mu _{\mathbf {A}}^2 - \mu _{\mathbf {A}}^2\left| {{{\varLambda }}} \right| }}= \frac{{{{\left\| {\mathrm {vec}(\mathbf {H})} \right\| }_2}\mathrm{{ + }}\upsilon }}{\eta }, \end{aligned}$$

imply that \(\mathrm {supp}[\mathbf {r}(\tau )] = {{\varLambda }}\). This completes the second part of the proof. \(\square \)
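As a complementary illustration of Theorem 1 (a sketch, not the authors' code), the following Python snippet simulates the correlation-based model, solves the constrained LASSO with the regularization parameter \(\tau = M\sigma _\varepsilon ^2 + \upsilon \) of (23), and reports the recovered support. It assumes that problem (8) is the nonnegative \(\ell _1\)-penalized least-squares fit of \(\mathbf {z} = \mathrm {vec}(\hat{\mathbf {R}})\) to \(\mathbf {Br}\); it uses the cvxpy package instead of the CVX toolbox cited in the references, and all dimensions, variances and the support threshold are illustrative choices.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
M, N, L = 10, 30, 2000                 # sensors, dictionary size, snapshots
K, sigma_eps = 3, 0.1                  # sparsity and noise standard deviation

A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0, keepdims=True)          # unit-norm atoms
support = np.sort(rng.choice(N, K, replace=False))
powers = rng.uniform(1.0, 2.0, K)                      # source variances sigma_i^2

# Snapshots y(l) = A x(l) + eps(l) with uncorrelated sources and noise
X = np.sqrt(powers)[:, None] * rng.standard_normal((K, L))
Y = A[:, support] @ X + sigma_eps * rng.standard_normal((M, L))

R_hat = Y @ Y.T / L                                    # sample covariance
z = R_hat.flatten(order="F")                           # z = vec(R_hat)
B = np.column_stack([np.kron(A[:, i], A[:, i]) for i in range(N)])

upsilon = 0.05
tau = M * sigma_eps**2 + upsilon                       # choice (23)

r = cp.Variable(N, nonneg=True)                        # r >= 0 (constrained LASSO)
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(z - B @ r) + tau * cp.sum(r))).solve()

est_support = np.flatnonzero(r.value > 1e-2)           # small numerical threshold
print(support.tolist(), est_support.tolist())          # true vs. recovered support
```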

Appendix 2: Proof of Theorem 2

Before the proof of Theorem 2, a proposition is stated.

Proposition 1

If \(\Vert \mathrm {vec}({\mathbf {H}})\Vert _2 <{\xi }\upsilon /{\eta } \), then the probability of exact sparse support recovery by the constrained LASSO can be bounded below as

$$\begin{aligned} { \begin{aligned} P_s(\tau )\ge \prod \limits _{i \in {\varLambda }}{\Pr \left( \hat{r}_{i}>\frac{\xi +\eta }{\eta ^2}\upsilon \right) } -\sum \limits _{m,n=1}^{M}\Pr \left( |H_{mn}| \ge \frac{\xi }{M\eta }\upsilon \right) . \end{aligned}} \end{aligned}$$

Proof

Since the event in (15) is a sufficient condition for exact support recovery, one directly has

$$\begin{aligned} {P_s(\tau )\ge \Pr \left( \Vert \mathrm {vec}(\mathbf {H})\Vert _2 <\frac{\xi }{\eta }\upsilon ,\; \hat{r} _{\min }> \frac{\Vert \mathrm {vec}(\mathbf {H})\Vert _2+\upsilon }{\eta }\right) }. \end{aligned}$$

If \(\Vert \mathrm {vec}(\mathbf {H})\Vert _2 <{\xi }\upsilon /{\eta } \), then

$$\begin{aligned} \hat{r}_{\min }>\frac{\frac{\xi }{\eta }\upsilon +\upsilon }{\eta } \Rightarrow \hat{r}_{\min } > \frac{\Vert \mathrm {vec}(\mathbf {H})\Vert _2+\upsilon }{\eta }. \end{aligned}$$

Using techniques similar to those in the proof of Lemma 6 in Pal and Vaidyanathan (2015), it follows that

$$\begin{aligned} { \begin{aligned} P_s(\tau )&\ge \Pr \left( \Vert \mathrm {vec}(\mathbf {H})\Vert _2<\frac{\xi }{\eta }\upsilon ,\; \hat{r}_{\min }>\frac{\frac{\xi }{\eta }\upsilon +\upsilon }{\eta }\right) \\&\ge \prod \limits _{i \in {\varLambda }}{\Pr \left( \hat{r}_{i}>\frac{\xi +\eta }{\eta ^2}\upsilon \right) } -\sum \limits _{m,n = 1}^{M}\Pr \left( |H_{mn}| \ge \frac{\xi }{M\eta }\upsilon \right) . \end{aligned}} \end{aligned}$$
(29)

This completes the proof of Proposition 1. \(\square \)

Proof (Proof of Theorem 2)

Since the distributions of the signal and the additive noise are unknown, we use the Chebyshev inequality to estimate the probabilities \(\Pr \left[ \hat{r} _{i} >{(\xi +\eta )}\upsilon /{\eta ^2}\right] \) and \(\Pr (\left| {{{H}_{mn}}} \right| \ge {\xi }\upsilon /{M\eta })\).

The variance of \({H}_{mn}\) is

$$\begin{aligned} \begin{aligned} \mathbb {D} ({H}_{mn})&= \mathbb {E}(H^2_{mn} ) - {\mathbb {E}^2}({H}_{mn}) \\&= \mathbb {E}\left( \sum \limits _{p = 1}^4 {T_p^2} + 2\sum \limits _{p,q=1,p \ne q}^4 {{T_p}} {T_q}\right) \\&= \sum \limits _{p = 1}^4 {\mathbb {E}\left( T_p^2\right) } + 2\sum \limits _{p,q=1, p \ne q}^4 \mathbb {E}\left( {T_p} {T_q}\right) , \end{aligned} \end{aligned}$$

Hence, one has

$$\begin{aligned} \begin{aligned}&\mathbb {D}({H}_{mn})=\frac{1}{L}\left[ \sum _{i,j\in {\varLambda },i\ne j}( \sigma _i^2\sigma _j^2 {A}_{mi}^2{A}_{nj}^2+\sigma _i^2\sigma _j^2{{A}_{mi}}{{A}_{ni}}{{A}_{mj}}{{A}_{nj}} +2 A_{mi}^2\sigma _i^2 \sigma _\varepsilon ^2)+ \sigma _\varepsilon ^4\right] \;\;m \ne n, \\&\mathbb {D}({H}_{mn})=\frac{1}{L}\sum \limits _{i,j\in {\varLambda },i\ne j} \left( 2\sigma _i^2\sigma _j^2 {{A}_{mi}^2{A}_{mj}^2}+ 2A_{mi}^2\sigma _i^2 \sigma _\varepsilon ^2\right) \;\;\quad m = n. \end{aligned} \end{aligned}$$

By the Chebyshev inequality \(\Pr [\left| {x - \mathbb {E}(x)} \right| \ge t] \le {\mathbb {D}(x)}/{t^2}\), one obtains

$$\begin{aligned} { \begin{aligned}&\sum _{m,n = 1}^{M}\Pr (|{H}_{mn}|\ge \frac{\xi }{M\eta }\upsilon ) \le \frac{{\eta ^2M^2}}{\xi ^2\upsilon ^2} \sum _{m,n = 1}^{{M}} \mathbb {D}\left( {H}_{mn}\right) \\&\quad =\frac{\eta ^2M^2}{\xi ^2\upsilon ^2L}\left\{ \sum _{i,j\in {\varLambda },i\ne j} \sigma _i^2\sigma _j^2[1 + (\mathbf {a}^{T}_i \mathbf {a}_j)^2 ]+ \sum _{i\in {\varLambda }} {2\sigma _i^2\sigma _\varepsilon ^2} + ({M^2} - M)\sigma _\varepsilon ^4\right\} \\&\quad \le \frac{{\eta ^2M^2}}{\xi ^2\upsilon ^2L}\left[ \sum _{i,j\in {\varLambda },i\ne j} {\sigma _i^2\sigma _j^2(1 + \mu _{\mathbf {A}}^2)}+\sum _{i\in {\varLambda }} {2\sigma _i^2\sigma _\varepsilon ^2} + ({M^2} - M)\sigma _\varepsilon ^4\right] , \end{aligned}} \end{aligned}$$
(30)

and

$$\begin{aligned} \begin{aligned} \Pr \left( \hat{r}_{i} >\frac{\xi +\eta }{\eta ^2}\upsilon \right)&\ge \Pr \left( \left| {\hat{r}_{i} - \sigma _i^2} \right| < \sigma _i^2 - \frac{\xi +\eta }{\eta ^2}\upsilon \right) \\&\ge 1 - \frac{{\mathbb {D}\left( \hat{r}_{i} - \sigma _i^2\right) }}{{{{\left( \sigma _i^2 - \frac{\xi +\eta }{\eta ^2}\upsilon \right) }^2}}}\\&= 1 - \frac{{\kappa _i - \sigma _i^4}}{{L{{\left( \sigma _i^2 -\frac{\xi +\eta }{\eta ^2}\upsilon \right) }^2}}}. \end{aligned} \end{aligned}$$
(31)

When \(\sigma _i^2 \le (\xi +\eta )\upsilon /{\eta ^2}\), the inequality in (31) is vacuous; hence \(\upsilon < {\eta ^2}\sigma _{\min }^2/(\xi +\eta )\) is necessary. From (30) and (31), and using Proposition 1, one obtains

$$\begin{aligned} \begin{aligned} {P_s(\tau )}&\ge \prod _{i \in {\varLambda }}\left[ 1 -\frac{{\kappa _i - \sigma _i^4}}{L(\sigma _i^2 - {\frac{\xi +\eta }{\eta ^2}\upsilon })^2}\right] \\&\,\quad -\frac{{{\eta ^2 M^2}}}{L\xi ^2\upsilon ^2 }\left[ \sum _{i,j \in {\varLambda }, i \ne j} {\sigma _i^2\sigma _j^2(1 + \mu _{\mathbf {A}}^2)}+ \sum \limits _{i \in {\varLambda }} {2\sigma _i^2\sigma _\varepsilon ^2} + ({M^2} - M)\sigma _\varepsilon ^4\right] , \end{aligned} \end{aligned}$$

which completes the proof of the theorem. \(\square \)
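The final bound of Theorem 2 is straightforward to evaluate numerically once its ingredients are specified. The helper below is a sketch (not from the paper) that plugs the source variances \(\sigma _i^2\), the fourth-moment quantities \(\kappa _i\) appearing in (31) (defined in the main text), the coherence \(\mu _{\mathbf {A}}\), the noise variance \(\sigma _\varepsilon ^2\), and \(M\), \(L\), \(\upsilon \) into the displayed expression.

```python
import numpy as np

def theorem2_lower_bound(sigma2, kappa, mu_A, sigma_eps2, M, L, upsilon):
    """Evaluate the Chebyshev-based lower bound on P_s(tau) from Theorem 2.

    sigma2 : source variances sigma_i^2 on the true support Lambda
    kappa  : the quantities kappa_i appearing in (31) (defined in the main text)
    """
    sigma2 = np.asarray(sigma2, float)
    kappa = np.asarray(kappa, float)
    K = sigma2.size                                     # |Lambda|
    eta = 1 + mu_A**2 - mu_A**2 * K
    xi = 1 + mu_A**2 - 2 * mu_A**2 * K
    thr = (xi + eta) / eta**2 * upsilon                 # bound requires sigma_i^2 > thr

    prod_term = np.prod(1 - (kappa - sigma2**2) / (L * (sigma2 - thr) ** 2))
    cross = (np.sum(np.outer(sigma2, sigma2)) - np.sum(sigma2**2)) * (1 + mu_A**2)
    noise = 2 * np.sum(sigma2) * sigma_eps2 + (M**2 - M) * sigma_eps2**2
    return prod_term - eta**2 * M**2 / (xi**2 * upsilon**2 * L) * (cross + noise)
```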

Appendix 3: Proof of Theorem 3

Before proving this theorem, we state three helpful lemmas.

Lemma 2

(Haupt et al. 2010, Lemma 6) Let \({x_i}\) and \({y_i}\), \(i = 1, \cdots ,K\), be sequences of i.i.d. zero-mean Gaussian random variables with variances \(\sigma _x^2\) and \(\sigma _y^2\), respectively. Then

$$\begin{aligned} \Pr \left( \left| {\sum \limits _{i = 1}^K {{x_i}{y_i}} } \right| \ge t\right) \le 2\exp \left[ { - \frac{{{t^2}}}{{2{\sigma _x}{\sigma _y}\left( {2{\sigma _x}{\sigma _y}K + t} \right) }}} \right] . \end{aligned}$$

Lemma 3

(Tan et al. 2014, Lemma A.2) Let \(x_i\), \(i=1,\cdots ,K\), be a sequence of i.i.d. zero-mean Gaussian random variables with variance \(\sigma ^2\). Then

$$\begin{aligned} \Pr \left( \left| \sum \limits _{i=1}^{K}x_i^2-K\sigma ^2\right| \ge t\right) \le 2\exp \left( -\frac{t^2}{16\sigma ^4K}\right) , \end{aligned}$$

for \(0\le t \le 4\sigma ^2K\).

Lemma 4

(Pal and Vaidyanathan 2015, Lemma 9) Let \(x_i\), \(i = 1, \cdots ,K\), be a sequence of i.i.d. zero-mean Gaussian random variables with variance \(\sigma _x^2\). Assume \(0< C < \sigma _x^2\); then there exists a constant \(0<s_0<1/2\) such that \(\Pr \left( \sum \nolimits _{i = 1}^K x_i^2/K > C \right) \ge 1 - {\beta ^{ - K}}\), where \(\beta =(1+2s_0)^{1/2} \exp \left( -Cs_0/\sigma _{x}^2\right) > 1\).
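Before these bounds are used in the proof, their constants can be sanity-checked by simulation. The snippet below (illustrative only, with \(K\), \(\sigma ^2\) and \(t\) chosen arbitrarily inside the valid range \(0\le t \le 4\sigma ^2K\)) compares the empirical tail probability of \(\sum _i x_i^2-K\sigma ^2\) with the bound of Lemma 3.

```python
import numpy as np

rng = np.random.default_rng(2)
K, sigma2, t = 200, 1.0, 60.0          # t lies in the valid range [0, 4*sigma2*K]
trials = 20_000

x = rng.normal(scale=np.sqrt(sigma2), size=(trials, K))
empirical = np.mean(np.abs((x**2).sum(axis=1) - K * sigma2) >= t)
bound = 2 * np.exp(-t**2 / (16 * sigma2**2 * K))

print(empirical, bound)                # empirical tail probability <= Lemma 3 bound
```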

Proof (Proof of Theorem 3)

The proof is similar to that of Theorem 2. First, we have

$$\begin{aligned} \begin{aligned} \Pr \left( \left| H_{mn}\right| \le \frac{\xi }{M\eta }\upsilon \right)&= \Pr \left( \left| \sum \limits _{p = 1}^4 T_{p} \right| \le \frac{\xi }{M\eta }\upsilon \right) \\&\ge \Pr \left( \bigcap _{p=1}^4\left\{ |T_{p}|\le \frac{\xi }{4M\eta }\upsilon \right\} \right) \\&= 1-\Pr \left( \bigcup _{p=1}^4\left\{ |T_{p}|\ge \frac{\xi }{4M\eta }\upsilon \right\} \right) , \end{aligned} \end{aligned}$$

which implies

$$\begin{aligned} \begin{aligned} \Pr \left( \left| H_{mn}\right| \ge \frac{\xi }{M\eta }\upsilon \right)&\le \Pr \left( \bigcup _{p=1}^4\left\{ |T_{p}|\ge \frac{\xi }{4M\eta }\upsilon \right\} \right) \\&\le \sum \limits _{p = 1}^{4}\Pr \left( |T_{p}|\ge \frac{\xi }{4M\eta }\upsilon \right) . \end{aligned} \end{aligned}$$

For probability \(\Pr \left( |T_{1}|\ge {\xi \upsilon }/{4M\eta }\right) \), we note that

$$\begin{aligned} \begin{aligned} |T_1|&\le \frac{1}{L}\sum \limits _{l = 1}^L\sum \limits _{i,j \in {\varLambda },i \ne j} |A_{mi}x_i(l)A_{nj}x_j(l)| \\&\le \frac{1}{L}\sum \limits _{l = 1}^L \sum \limits _{i,j \in {\varLambda },i \ne j} \Vert \mathrm {vec}(\mathbf {A})\Vert _{\infty }^{2}|x_{ \max }^{(1)}(l)x_{\max }^{(2)}(l)|\\&\le \frac{1}{L}\sum \limits _{l = 1}^L |{\varLambda }|(|{\varLambda }|-1)\Vert \mathrm {vec}(\mathbf {A})\Vert _{\infty }^{2}|x_{\max }^{(1)}(l)x_{\max }^{(2)}(l)|, \end{aligned}\end{aligned}$$

where \(x_{\max }^{(1)}(l)\) and \(x_{\max }^{(2)}(l)\) denote the largest and second largest magnitudes in the set \(\{x_i(l)\}_{i\in {\varLambda }}\) for each \(l\). This implies

$$\begin{aligned} \begin{aligned}&\Pr \left( |T_{1}|\ge \frac{\xi \upsilon }{4M\eta }\right) \\&\quad \le \Pr \left[ \sum \limits _{l = 1}^L |x_{\max }^{(1)}(l)x_{\max }^{(2)}(l)|\ge \frac{\xi \upsilon L}{4M\eta |{\varLambda }|(|{\varLambda }|-1)\Vert \mathrm {vec}(\mathbf {A})\Vert _{\infty }^{2}}\right] . \end{aligned} \end{aligned}$$

By Lemma 2, the upper bound of the probability \(\Pr \left( |T_{1}|\ge {\xi \upsilon }/{4M\eta }\right) \) can be estimated as

$$\begin{aligned} \Pr \left( {\left| {{T_1}} \right| \ge \frac{\xi \upsilon }{4M\eta }} \right) \; \le 2\exp \left[ { - L{C_1}(\upsilon ,\delta _1)} \right] , \end{aligned}$$

where

$$\begin{aligned} C_1(\upsilon ,\delta _1)= \frac{\xi ^2\upsilon ^2}{\delta _1 (\delta _1+\xi \upsilon )}, \end{aligned}$$

and \(\delta _1=8M\eta \Vert \mathrm {vec}(\mathbf {A})\Vert _{\infty }^{2} |{\varLambda }|(|{\varLambda }| - 1)\sigma _{\max }^{(1)}\sigma _{\max }^{(2)}\). Similarly, we can obtain the following results

$$\begin{aligned} \begin{aligned} \Pr \left( {\left| {{T_2}} \right| \ge \frac{\xi \upsilon }{4M\eta }} \right)&\le 2\exp \left[ { - L{C_1}(\upsilon ,\delta _2)} \right] ,\\ \Pr \left( {\left| {{T_3}} \right| \ge \frac{\xi \upsilon }{4M\eta }} \right)&\le 2\exp \left[ { - L{C_1}(\upsilon ,\delta _2)} \right] ,\\ \Pr \left( {\left| {{T_4}} \right| \ge \frac{\xi \upsilon }{4M\eta }} \right)&\le 2\exp \left[ { - L{C_1}(\upsilon ,\delta _3)} \right] ,\quad m \ne n, \end{aligned} \end{aligned}$$

where \( \delta _2=8 M\eta \Vert \mathrm {vec}(\mathbf {A})\Vert _{\infty }|{\varLambda }|\sigma _{\max }^{(1)}{\sigma _\varepsilon }\) and \(\delta _3 = 8M\eta \sigma _\varepsilon ^2\). From Lemma 3, it follows that

$$\begin{aligned} \Pr \left( {\left| {{{T}_4}} \right| \ge \frac{\xi \upsilon }{4M\eta }} \right) \le 2\exp \left[ { - L{C_2}(\upsilon )} \right] , \;\; m=n,\end{aligned}$$

where

$$\begin{aligned} {C_2}(\upsilon ) = \frac{{\xi ^2\upsilon ^2}}{{256\eta ^2M^2\sigma _\varepsilon ^4}},\;\;\; \upsilon \le \frac{16M\eta }{\xi }\sigma _{\varepsilon }^2 . \end{aligned}$$

Applying Proposition 1 and Lemma 4, the desired result is obtained as

$$\begin{aligned} P_s(\tau )\ge & {} \prod \limits _{i \in {\varLambda }}{(1 - {\beta _i}^{ - L})} - 2M^2\exp [- LC_1(\upsilon ,\delta _1 )] \\&- 4{M^2}\exp [-LC_1(\upsilon ,\delta _2)]-2(M^2 - M)\exp [-LC_1(\upsilon ,\delta _3)] \\&- 2M\exp [-LC_2(\upsilon )], \end{aligned}$$

which completes the proof of the theorem. \(\square \)
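As with Theorem 2, this bound can be evaluated numerically once its constants are fixed. The sketch below (an illustration, not the authors' code) collects \(\eta \), \(\xi \), \(\delta _1\), \(\delta _2\), \(\delta _3\), \(C_1\) and \(C_2\) as defined above; the per-source constants \(\beta _i\) of Lemma 4 are passed in directly, since they depend on the unspecified constant \(s_0\).

```python
import numpy as np

def theorem3_lower_bound(beta, mu_A, A_inf, sigma_max1, sigma_max2,
                         sigma_eps, K, M, L, upsilon):
    """Evaluate the exponential lower bound on P_s(tau) at the end of the proof.

    beta       : per-source constants beta_i from Lemma 4 (length |Lambda|)
    A_inf      : ||vec(A)||_inf, the largest entry of A in magnitude
    sigma_max1 : largest source standard deviation
    sigma_max2 : second largest source standard deviation
    K          : support size |Lambda|
    """
    eta = 1 + mu_A**2 - mu_A**2 * K
    xi = 1 + mu_A**2 - 2 * mu_A**2 * K

    def C1(delta):                      # C_1(upsilon, delta) as defined above
        return xi**2 * upsilon**2 / (delta * (delta + xi * upsilon))

    d1 = 8 * M * eta * A_inf**2 * K * (K - 1) * sigma_max1 * sigma_max2
    d2 = 8 * M * eta * A_inf * K * sigma_max1 * sigma_eps
    d3 = 8 * M * eta * sigma_eps**2
    C2 = xi**2 * upsilon**2 / (256 * eta**2 * M**2 * sigma_eps**4)

    return (np.prod(1 - np.asarray(beta, float) ** (-L))
            - 2 * M**2 * np.exp(-L * C1(d1))
            - 4 * M**2 * np.exp(-L * C1(d2))
            - 2 * (M**2 - M) * np.exp(-L * C1(d3))
            - 2 * M * np.exp(-L * C2))
```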

Cite this article

Fu, Y., Hu, R., Xiang, Y. et al. Sparse support recovery using correlation information in the presence of additive noise. Multidim Syst Sign Process 28, 1443–1461 (2017). https://doi.org/10.1007/s11045-016-0420-5
