Abstract
To address the performance degradation caused by the cancellation of positive and negative entries in adaptive compressed sensing algorithms, we propose a simple adaptive sensing and group testing algorithm for sparse signals. The algorithm, termed "preprocessed compressive adaptive sense and search" (PCASS), divides the input signal into two equal-length subsignals containing only non-positive or only non-negative entries through a nonlinear preprocessing step, and subsequently performs adaptive sensing and group testing on each subsignal. The proposed algorithm is computationally less intensive than non-adaptive compressed sensing and requires only \( k\log(n/k) \) measurements to recover a k-sparse signal of dimension n. A theoretical guarantee for signal recovery is provided, and the numerical examples demonstrate better recovery performance than non-adaptive sensing and the compressive adaptive sense and search algorithm at the same signal-to-noise ratio.
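The preprocessing idea described above can be illustrated with a minimal sketch. The function name `split_signal` and the example signal are assumptions for illustration, not taken from the paper; the sketch only shows how a signal can be split into a non-negative and a non-positive subsignal so that entries of opposite sign cannot cancel within a group measurement.

```python
import numpy as np

def split_signal(x):
    """Illustrative sketch of the nonlinear preprocessing step: split x
    into a non-negative part and a non-positive part so that positive
    and negative entries cannot cancel within a group measurement.
    (Function name and details are assumptions, not from the paper.)"""
    x = np.asarray(x, dtype=float)
    x_pos = np.maximum(x, 0.0)  # keeps only the non-negative entries
    x_neg = np.minimum(x, 0.0)  # keeps only the non-positive entries
    return x_pos, x_neg

x = np.array([3.0, -1.0, 0.0, 2.0, -4.0])
xp, xn = split_signal(x)
assert np.allclose(xp + xn, x)  # the two subsignals reconstruct x
assert np.all(xp >= 0) and np.all(xn <= 0)
```

Each subsignal has a single sign, so the magnitude of a group measurement directly reflects the energy of the entries it covers.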
Appendix
Proof of Theorem 1
If PCASS with \( \alpha = 1 \) is used for signal recovery and the initial number of partitions is set to \( l_{0} = 4k \), then \( 2l_{0} \) measurements are required in the first step and \( 2k \) measurements in each of the remaining \( s_{0} - 1 \) steps. Therefore, the total number of measurements is \( m = l_{0} + l_{0} + \left( {s_{0} - 1} \right) \cdot 2k = 4k + 2k\log \left( {n/k} \right) \).
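The two expressions for the measurement count agree when \( s_{0} = \log_{2}(n/k) - 1 \), which is an assumption made explicit here. As a sanity check (the helper `pcass_measurements` is hypothetical, not from the paper):

```python
import math

def pcass_measurements(k, n):
    """Illustrative check of the measurement count in the proof, assuming
    l0 = 4k initial partitions and s0 = log2(n/k) - 1 steps (the value
    that makes the step-by-step and closed-form counts agree)."""
    l0 = 4 * k
    s0 = int(math.log2(n / k)) - 1
    # first step: 2*l0 measurements; each remaining step: 2k measurements
    return l0 + l0 + (s0 - 1) * 2 * k

k, n = 4, 1024
m = pcass_measurements(k, n)
closed_form = 4 * k + 2 * k * int(math.log2(n / k))
assert m == closed_form  # both give 80 for k = 4, n = 1024
```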
The total sensing energy constraint is satisfied:
where the amplitude of the sensing vector at step \( s \) is set to \( \sqrt {\frac{Ms}{\gamma n}} \). If we set \( \gamma = 1 + \frac{2k}{l_{0}} \mathop \sum \limits_{s = 2}^{{s_{0} }} s2^{ - s} \), then \( \left\| A \right\|_{{{\text{fro}}}}^{2} = 2M \).
A sufficient condition for exact recovery of the support set of the signal is that the procedure does not incorrectly eliminate any measurement corresponding to a subinterval containing nonzero entries on any of the \( s_{0} \) steps. Let \( \left| {y_{s,1} } \right|,\left| {y_{s,2} } \right|, \ldots ,\left| {y_{{s,t_{s} }} } \right| \) be the measurements corresponding to the signal subintervals containing one or more nonzero entries, and \( \left| {y_{{s,t_{s} + 1}} } \right|,\left| {y_{{s,t_{s} + 2}} } \right|, \ldots \) be the measurements corresponding to subintervals containing only zero entries at step \( s \). Consider a series of thresholds \( \tau_{s} \), \( s = 1,2, \ldots ,s_{0} \). If \( \left| {y_{s,1} } \right|,\left| {y_{s,2} } \right|, \ldots ,\left| {y_{{s,t_{s} }} } \right| \) are all greater than \( \tau_{s} \) and \( \left| {y_{{s,t_{s} + 1}} } \right|,\left| {y_{{s,t_{s} + 2}} } \right|, \ldots \) are all less than \( \tau_{s} \) for every \( s \), the sufficient condition is satisfied. The additive Gaussian noise has zero mean and variance \( \sigma^{2} \); therefore, the measurements \( y_{s,1} ,y_{s,2} , \ldots ,y_{{s,t_{s} }} \) are normally distributed with a mean of magnitude at least \( \left| x \right|_{ \hbox{min} } a_{s} \) and variance \( \sigma^{2} \), while \( y_{{s,t_{s} + 1}} ,y_{{s,t_{s} + 2}} , \ldots \) are normally distributed with zero mean and variance \( \sigma^{2} \).
According to the previous analysis and the error metrics defined in Sect. 3:
We set \( \tau_{s} = \left| x \right|_{ \hbox{min} } a_{s} /2 \), where \( a_{s} \) is the amplitude of the nonzero entries of the sensing vector at step \( s \). The derivation of the probability of error is detailed above. Inequality (13) uses the tail bound of the standard normal distribution, \( 1 - F_{N} \left( x \right) \le \frac{1}{2}{ \exp }\left( { - \frac{{x^{2} }}{2}} \right) \) for \( x \ge 0 \), where \( F_{N} \left( \cdot \right) \) is the standard Gaussian cumulative distribution function. The inequality
is used in (14). Setting
in (14) yields \( P\left( {\hat{S} \ne S} \right) \le \mathop \sum \limits_{s = 1}^{{s_{0} }} \left( {2/\delta } \right)^{ - s} \le \delta \), which completes the proof of Theorem 1.
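The two bounds used in this proof can be checked numerically. This is a verification sketch, not part of the paper: it confirms the standard normal tail bound from inequality (13) and the final geometric-sum bound \( \sum_{s=1}^{s_{0}} (2/\delta)^{-s} \le \delta \) for \( 0 < \delta \le 1 \).

```python
import math
from statistics import NormalDist

# Tail bound used in inequality (13): 1 - F_N(x) <= 0.5*exp(-x**2/2) for x >= 0.
Phi = NormalDist().cdf
xs = [0.0, 0.5, 1.0, 2.0, 4.0]
tails = [1.0 - Phi(x) for x in xs]
bounds = [0.5 * math.exp(-x * x / 2.0) for x in xs]
assert all(t <= b + 1e-12 for t, b in zip(tails, bounds))

# Final bound in the proof: sum_{s=1}^{s0} (2/delta)**(-s) <= delta
# for 0 < delta <= 1 and any number of steps s0.
def error_sum(delta, s0):
    return sum((2.0 / delta) ** (-s) for s in range(1, s0 + 1))

assert all(error_sum(d, s0) <= d + 1e-12
           for d in (0.1, 0.5, 1.0) for s0 in (1, 5, 20))
```

At \( x = 0 \) the tail bound is tight (both sides equal 1/2), and the geometric sum approaches \( \delta \) from below as \( s_{0} \) grows with \( \delta = 1 \).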
Proof of Theorem 2
As in the proof of Theorem 1, a sufficient condition for exact recovery of the support set of the signal is that the procedure does not incorrectly eliminate any measurement corresponding to a subinterval containing nonzero entries on any of the \( s_{0} \) steps. Let \( \tau_{s} \), \( s = 1,2, \ldots ,s_{0} \), denote a series of thresholds, and
The total sensing energy constraint is
where the amplitude of the sensing vector at step \( s \) is set to \( \sqrt {\frac{Ms}{\gamma n}} \). If we set \( \gamma = 1 + \frac{2\alpha k}{l_{0}} \mathop \sum \limits_{s = 2}^{{s_{0} }} s2^{ - s} \), then \( \left\| A \right\|_{{{\text{fro}}}}^{2} = 2M \).
According to the previous analysis and the error metrics defined in Sect. 3:
The remaining steps are identical to those in the proof of Theorem 1 and are therefore omitted.
Lin, Y., Hu, Q. Preprocessed Compressive Adaptive Sense and Search Algorithm. Circuits Syst Signal Process 38, 918–929 (2019). https://doi.org/10.1007/s00034-018-0894-5