
Preprocessed Compressive Adaptive Sense and Search Algorithm

  • Short Paper
  • Published in Circuits, Systems, and Signal Processing

Abstract

To address the performance degradation caused by the cancellation of positive and negative entries in adaptive compressed sensing algorithms, we propose a simple adaptive sensing and group testing algorithm for sparse signals. The algorithm, termed "preprocessed compressive adaptive sense and search" (PCASS), divides the input signal into two equal-length subsignals, one containing only non-positive and one containing only non-negative entries, through a nonlinear preprocessing step, and then performs adaptive sensing and group testing. The proposed algorithm is computationally less intensive than non-adaptive compressed sensing and requires only k log(n/k) measurements to recover a k-sparse signal of dimension n. A theoretical guarantee for signal recovery is provided, and numerical examples demonstrate better recovery performance than both non-adaptive sensing and the compressive adaptive sense and search algorithm at the same signal-to-noise ratio.
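The nonlinear preprocessing is only summarized above; a minimal sketch of one natural reading of it, the elementwise positive-part/negative-part split (the function name `split_signal` is ours, not from the paper):

```python
def split_signal(x):
    """Split x into two equal-length subsignals: one keeping only the
    non-negative entries and one keeping only the non-positive entries,
    so that the two sum back to x. This decomposition is an assumed
    reading of the preprocessing step described in the abstract."""
    x_pos = [max(v, 0.0) for v in x]  # non-negative part
    x_neg = [min(v, 0.0) for v in x]  # non-positive part
    return x_pos, x_neg

x = [3.0, -1.0, 0.0, 2.0, -4.0]
x_pos, x_neg = split_signal(x)
# x_pos = [3.0, 0.0, 0.0, 2.0, 0.0]; x_neg = [0.0, -1.0, 0.0, 0.0, -4.0]
```

Because every entry of a subsignal shares one sign, aggregate measurements taken within a subsignal cannot cancel, which is exactly the degradation the abstract targets.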


(Figures 1–4 appear in the full article.)


Author information

Corresponding author: Yun Lin.

Appendix

Proof of Theorem 1

If PCASS with \( \alpha = 1 \) is used for signal recovery and the number of initial partitions is set to \( l_{0} = 4k \), then \( 2l_{0} \) measurements are required in the first step and \( 2k \) measurements in each of the remaining \( s_{0} - 1 \) steps. The total number of measurements is therefore \( m = l_{0} + l_{0} + \left( {s_{0} - 1} \right) \cdot 2k = 4k + 2k\log \left( {n/k} \right) \).
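The count can be sanity-checked numerically. The two sides of the identity agree when the number of steps is \( s_{0} = \log_{2} \left( {n/k} \right) - 1 \) with \( n/k \) a power of two; that value of \( s_{0} \) is our assumption, inferred by equating the two expressions:

```python
import math

def pcass_measurements(n, k):
    """Measurement count for PCASS with alpha = 1: l0 = 4k partitions
    measured twice in the first step, then 2k measurements in each of
    the remaining s0 - 1 steps."""
    l0 = 4 * k
    s0 = int(math.log2(n // k)) - 1  # assumed step count (n/k a power of two)
    return l0 + l0 + (s0 - 1) * 2 * k

# Matches 4k + 2k*log2(n/k): e.g. n = 1024, k = 4 gives 16 + 8*8 = 80.
print(pcass_measurements(1024, 4))  # 80
```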

The total sensing energy constraint is satisfied:

$$ \begin{aligned} \left\| A \right\|_{{{\text{fro}}}}^{2} & = l_{0} \frac{n}{{l_{0} }}\frac{M}{\gamma n} + l_{0} \frac{n}{{l_{0} }}\frac{M}{\gamma n} + \mathop \sum \limits_{s = 2}^{{s_{0} }} \frac{2kn}{{l_{0} 2^{s - 1} }}\frac{Ms}{\gamma n} \\ & = \frac{2M}{\gamma } + \frac{4kM}{{\gamma l_{0} }}\mathop \sum \limits_{s = 2}^{{s_{0} }} s2^{ - s} \\ & = \frac{2M}{\gamma }\left( {1 + \frac{2k}{{l_{0} }}\mathop \sum \limits_{s = 2}^{{s_{0} }} s2^{ - s} } \right) \\ \end{aligned} $$
(12)

where the amplitude of the nonzero entries of the sensing vectors at step \( s \) is set to \( \sqrt {\frac{Ms}{\gamma n}} \). If we set \( \gamma = 1 + \left( {2k/l_{0} } \right)\mathop \sum \nolimits_{s = 2}^{{s_{0} }} s2^{ - s} \), then \( \left\| A \right\|_{{{\text{fro}}}}^{2} = 2M \).
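Summing the squared amplitudes over all sensing vectors reproduces the normalization \( \left\| A \right\|_{{{\text{fro}}}}^{2} = 2M \); a numerical sketch of (12), assuming \( l_{0} = 4k \) and an illustrative \( s_{0} \):

```python
def sensing_energy(n, k, s0, M=1.0):
    """Total sensing energy per (12): the l0 first-step vectors are used
    twice, each with n/l0 nonzero entries of squared amplitude M/(gamma*n);
    step s >= 2 uses 2k vectors with n/(l0 * 2^(s-1)) nonzero entries of
    squared amplitude M*s/(gamma*n)."""
    l0 = 4 * k
    gamma = 1.0 + (2 * k / l0) * sum(s * 2.0 ** (-s) for s in range(2, s0 + 1))
    energy = 2 * l0 * (n / l0) * (M / (gamma * n))  # first step, counted twice
    for s in range(2, s0 + 1):
        energy += 2 * k * (n / (l0 * 2 ** (s - 1))) * (M * s / (gamma * n))
    return energy, gamma

energy, gamma = sensing_energy(n=1024, k=4, s0=7)
# energy equals 2*M, and gamma stays below 7/4 since sum_{s>=2} s*2^-s < 3/2
```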

A sufficient condition for exact recovery of the support set of a signal is that the procedure never incorrectly eliminates a measurement interval containing nonzero entries at any of the \( s_{0} \) steps. Let \( \left| {y_{s,1} } \right|,\left| {y_{s,2} } \right|, \ldots ,\left| {y_{{s,t_{s} }} } \right| \) be the measurements corresponding to the signal subintervals containing one or more nonzero entries, and \( \left| {y_{{s,t_{s} + 1}} } \right|,\left| {y_{{s,t_{s} + 2}} } \right|, \ldots \) be the measurements corresponding to the zero entries at step \( s \). Consider a series of thresholds \( \tau_{s} \), \( s = 1,2, \ldots ,s_{0} \). If \( \left| {y_{s,1} } \right|,\left| {y_{s,2} } \right|, \ldots ,\left| {y_{{s,t_{s} }} } \right| \) are all greater than \( \tau_{s} \) and \( \left| {y_{{s,t_{s} + 1}} } \right|,\left| {y_{{s,t_{s} + 2}} } \right|, \ldots \) are all less than \( \tau_{s} \) for every \( s \), the sufficient condition holds. The additive Gaussian noise has zero mean and variance \( \sigma^{2} \); therefore, the measurements \( y_{s,1} ,y_{s,2} , \ldots ,y_{{s,t_{s} }} \) are normally distributed with a mean greater than \( \left| x \right|_{\min } a_{s} \) and variance \( \sigma^{2} \), while \( y_{{s,t_{s} + 1}} ,y_{{s,t_{s} + 2}} , \ldots \) are normally distributed with zero mean and variance \( \sigma^{2} \).

According to the previous analysis and the error metrics defined in Sect. 3:

$$ \begin{aligned} P\left( {\hat{S} \ne S} \right) & \le \mathop \sum \limits_{j = 1}^{{t_{1} }} P\left( {\left| {y_{1,j} } \right| \le \frac{{\left| x \right|_{\min } a_{1} }}{2}} \right) + \mathop \sum \limits_{{j = t_{1} + 1}}^{{l_{0} }} P\left( {\left| {y_{1,j} } \right| \ge \frac{{\left| x \right|_{\min } a_{1} }}{2}} \right) \\ & \quad + \mathop \sum \limits_{s = 2}^{{s_{0} }} \left( {\mathop \sum \limits_{j = 1}^{{t_{s} }} P\left( {\left| {y_{s,j} } \right| \le \frac{{\left| x \right|_{\min } a_{s} }}{2}} \right) + \mathop \sum \limits_{{j = t_{s} + 1}}^{2k} P\left( {\left| {y_{s,j} } \right| \ge \frac{{\left| x \right|_{\min } a_{s} }}{2}} \right)} \right) \\ & \le \left( {4k - \frac{1}{2}} \right)\exp \left( { - \frac{{\left| x \right|_{\min }^{2} a_{1}^{2} }}{{8\sigma^{2} }}} \right) + \mathop \sum \limits_{s = 2}^{{s_{0} }} \left( {2k - \frac{1}{2}} \right)\exp \left( { - \frac{{\left| x \right|_{\min }^{2} a_{s}^{2} }}{{8\sigma^{2} }}} \right) \\ \end{aligned} $$
(13)
$$ \le \mathop \sum \limits_{s = 1}^{{s_{0} }} \exp \left( { - \frac{{\left| x \right|_{\min }^{2} Ms}}{{8\gamma n\sigma^{2} }} + \log \left( {4k - \frac{1}{2}} \right)} \right) \le \mathop \sum \limits_{s = 1}^{{s_{0} }} \exp \left( { - \frac{{\left| x \right|_{\min }^{2} Ms}}{{14n\sigma^{2} }} + \log \left( {4k - \frac{1}{2}} \right)} \right)\\ $$
(14)
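Inequality (13) rests on the Gaussian tail bound \( 1 - F_{N} \left( x \right) \le \frac{1}{2}\exp \left( { - x^{2} /2} \right) \) for \( x \ge 0 \); writing \( 1 - F_{N} \left( x \right) = \frac{1}{2}{\text{erfc}}\left( {x/\sqrt 2 } \right) \), it can be checked directly with the standard-library error function:

```python
import math

def gauss_tail(x):
    """Upper tail 1 - F_N(x) of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_bound(x):
    """The bound used in (13): (1/2) * exp(-x^2 / 2), valid for x >= 0."""
    return 0.5 * math.exp(-x * x / 2.0)

# The bound holds with equality at x = 0 and remains valid beyond.
for x in (0.0, 0.5, 1.0, 2.0, 4.0):
    assert gauss_tail(x) <= tail_bound(x)
```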

We set \( \tau_{s} = \left| x \right|_{\min } a_{s} /2 \), where \( a_{s} \) is the amplitude of the nonzero entries of the sensing vector at step \( s \). Inequality (13) uses the tail bound of the standard normal distribution, \( 1 - F_{N} \left( x \right) \le \frac{1}{2}{ \exp }\left( { - \frac{{x^{2} }}{2}} \right) \) for \( x \ge 0 \), where \( F_{N} \left( \cdot \right) \) is the Gaussian cumulative distribution function. The inequality

$$ \gamma = 1 + \frac{2k}{{l_{0} }}\mathop \sum \limits_{s = 2}^{{s_{0} }} s2^{ - s} \le \frac{7}{4} $$
(15)

is used in (14). Setting

$$ \left| x \right|_{ \hbox{min} } \ge \sigma \sqrt {14\frac{n}{M}\log \left( {\frac{8k - 1}{\delta }} \right)} $$
(16)

in (14) yields \( P\left( {\hat{S} \ne S} \right) \le \mathop \sum \nolimits_{s = 1}^{{s_{0} }} \left( {2/\delta } \right)^{ - s} \le \delta \), which completes the proof of Theorem 1.
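The final inequality uses the geometric sum \( \mathop \sum \nolimits_{s = 1}^{{s_{0} }} \left( {2/\delta } \right)^{ - s} = \mathop \sum \nolimits_{s = 1}^{{s_{0} }} \left( {\delta /2} \right)^{s} \le \frac{\delta /2}{1 - \delta /2} \le \delta \) for \( 0 < \delta < 1 \); a quick numerical illustration of just that inequality:

```python
def error_sum(delta, s0):
    """Sum_{s=1}^{s0} (delta/2)^s, the final bound reached in the proof."""
    return sum((delta / 2.0) ** s for s in range(1, s0 + 1))

# The partial sums stay below delta for any 0 < delta < 1 and any s0.
for delta in (0.01, 0.1, 0.5, 0.9):
    assert error_sum(delta, s0=50) <= delta
```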

Proof of Theorem 2

As in the proof of Theorem 1, a sufficient condition for exact recovery of the support set of the signal is that the procedure never incorrectly eliminates a measurement interval containing nonzero entries at any of the \( s_{0} \) steps. Let \( \tau_{s} \), \( s = 1,2, \ldots ,s_{0} \), denote a series of thresholds; then

$$ \gamma = 1 + \frac{2\alpha k}{{l_{0} }}\mathop \sum \limits_{s = 2}^{{s_{0} }} s2^{ - s} \le 1 + \frac{3}{4}\alpha $$
(17)
$$ m = 4k\left( {2 - \alpha } \right) + 2\alpha k\log \left( {n/k} \right) $$
(18)

The total sensing energy constraint is

$$ \left\| A \right\|_{{{\text{fro}}}}^{2} = \frac{2M}{\gamma }\left( {1 + \frac{2\alpha k}{{l_{0} }}\mathop \sum \limits_{s = 2}^{{s_{0} }} s2^{ - s} } \right) $$
(19)

where the amplitude of the nonzero entries of the sensing vectors at step \( s \) is set to \( \sqrt {\frac{Ms}{\gamma n}} \). If we set \( \gamma = 1 + \left( {2\alpha k/l_{0} } \right)\mathop \sum \nolimits_{s = 2}^{{s_{0} }} s2^{ - s} \), then \( \left\| A \right\|_{{{\text{fro}}}}^{2} = 2M \).
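Setting \( \alpha = 1 \) in (17) and (18) recovers the Theorem 1 quantities (\( \gamma \le 7/4 \) and \( m = 4k + 2k\log \left( {n/k} \right) \)); a sketch of that consistency check, assuming \( l_{0} = 4k \) as in Theorem 1 and using \( \mathop \sum \nolimits_{s \ge 2} s2^{ - s} = 3/2 \):

```python
import math

def gamma_upper(alpha):
    """RHS of (17): 1 + (3/4)*alpha, from sum_{s>=2} s*2^-s = 3/2 and l0 = 4k."""
    return 1.0 + 0.75 * alpha

def measurements(n, k, alpha):
    """Measurement count per (18)."""
    return 4 * k * (2 - alpha) + 2 * alpha * k * math.log2(n / k)

# alpha = 1 reduces to Theorem 1: gamma <= 7/4 and m = 4k + 2k*log2(n/k).
assert gamma_upper(1.0) == 1.75
assert measurements(1024, 4, 1.0) == 4 * 4 + 2 * 4 * math.log2(1024 / 4)
```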

According to the previous analysis and the error metrics defined in Sect. 3:

$$ \begin{aligned} P\left( {\hat{S} \ne S} \right) & \le P\left( {\mathop {\bigcup }\limits_{j = 1}^{{t_{1} }} \left\{ {\left| {y_{1,j} } \right| \le \tau_{1} } \right\} \cup \mathop {\bigcup }\limits_{{j = t_{1} + 1}}^{{l_{0} }} \left\{ {\left| {y_{1,j} } \right| \ge \tau_{1} } \right\}} \right) \\ & \quad + \mathop \sum \limits_{s = 2}^{{s_{0} }} P\left( {\mathop {\bigcup }\limits_{j = 1}^{{t_{s} }} \left\{ {\left| {y_{s,j} } \right| \le \tau_{s} } \right\} \cup \mathop {\bigcup }\limits_{{j = t_{s} + 1}}^{2k} \left\{ {\left| {y_{s,j} } \right| \ge \tau_{s} } \right\}} \right) \\ \end{aligned} $$
(20)

The remaining steps are identical to those in the proof of Theorem 1 and are omitted for brevity.


Cite this article

Lin, Y., Hu, Q. Preprocessed Compressive Adaptive Sense and Search Algorithm. Circuits Syst Signal Process 38, 918–929 (2019). https://doi.org/10.1007/s00034-018-0894-5
