Abstract
Most existing recovery algorithms in the compressed sensing framework are not robust to impulsive noise, yet impulsive noise is commonly encountered in practical communication and signal processing systems. In this paper, we propose a method, termed Bayesian sparse reconstruction, for recovering a sparse signal from a measurement vector corrupted by impulsive noise. The method comprises five parts: preliminary detection of the impulse location set, an impulsive-noise fast relevance vector machine algorithm, a pruning step, a Bayesian impulse detection algorithm, and the maximum a posteriori estimate of the sparse vector. Through the interplay of the impulsive-noise fast relevance vector machine algorithm, the pruning step, and the Bayesian impulse detection algorithm, the method achieves effective signal recovery in the presence of impulsive noise. Experimental results show that the Bayesian sparse reconstruction method is robust to impulsive noise and remains effective in an additive white Gaussian noise environment.
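To make the measurement model concrete, the short sketch below simulates compressive measurements corrupted jointly by dense Gaussian noise and a few large-amplitude impulses, which is the setting the proposed method targets; the dimensions, sparsity level, and noise magnitudes are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: an N-dimensional K-sparse signal observed through M < N measurements.
N, M, K, num_impulses = 256, 80, 8, 5

# K-sparse signal x with random support and Gaussian nonzero amplitudes.
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Gaussian measurement matrix Phi, scaled so its columns have roughly unit norm.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Dense background noise plus a few large-amplitude impulses.
noise = 0.01 * rng.standard_normal(M)
impulse_locations = rng.choice(M, num_impulses, replace=False)
impulses = np.zeros(M)
impulses[impulse_locations] = 10.0 * rng.standard_normal(num_impulses)

# Corrupted measurement vector: the input to any robust recovery method.
y = Phi @ x + noise + impulses
```

A robust recovery algorithm must then estimate the sparse vector from y and Phi while identifying or suppressing the impulse-corrupted measurements.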
Acknowledgements
This work was supported by the National Science Foundation of China (Grant Nos. 60971129, 61201326 and 61271335), the National Research Program of China (973 Program) (Grant No. 2011CB302303), the Scientific Research Foundation of NUPT (Grant No. NY211039), the Natural Science Fund for Higher Education of Jiangsu Province (Grant No. 12KJB510021) and the Scientific Innovation Research Programs of College Graduate in Jiangsu Province (Grant Nos. CXLX11_0408 and CXZZ12_0473).
Appendices
Appendix A
(1) Derivation of Eq. (73):
When the index i∈{1,2,…,M} is added to the preliminary candidate set F to constitute the new candidate set \(\tilde{F}\), the matrix \(\tilde{\boldsymbol{\vartheta}}\) can be represented as
The relationship between the matrix \(\tilde{\boldsymbol{\vartheta}}\) and the matrix ϑ is
Then we can apply the block-form matrix inversion property to the matrix \(\tilde{\boldsymbol{\vartheta}}\) in Eq. (96), so that the inverse of \(\tilde{\boldsymbol{\vartheta}}\) can also be represented in block form as
where

and
Combining Eqs. (71) and (72), we have Eq. (73) established.
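For reference, the block-form inversion property invoked in this step is the following standard identity, stated here with a generic symmetric partition (the blocks \(\boldsymbol{A}\), \(\boldsymbol{b}\) and the scalar \(c\) merely stand in for the actual quantities of Eqs. (96) and (71)–(72), which are not reproduced here):
\[
\begin{pmatrix} \boldsymbol{A} & \boldsymbol{b} \\ \boldsymbol{b}^{\mathrm{T}} & c \end{pmatrix}^{-1}
=
\begin{pmatrix}
\boldsymbol{A}^{-1} + s^{-1}\boldsymbol{A}^{-1}\boldsymbol{b}\boldsymbol{b}^{\mathrm{T}}\boldsymbol{A}^{-1} & -s^{-1}\boldsymbol{A}^{-1}\boldsymbol{b} \\
-s^{-1}\boldsymbol{b}^{\mathrm{T}}\boldsymbol{A}^{-1} & s^{-1}
\end{pmatrix},
\qquad
s = c - \boldsymbol{b}^{\mathrm{T}}\boldsymbol{A}^{-1}\boldsymbol{b},
\]
where \(s\) is the Schur complement of \(\boldsymbol{A}\).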
(2) Derivation of Eq. (76):
The updated matrix \(\tilde{\boldsymbol{\varTheta}}\) can be written as
Substituting the matrix \(\tilde{\boldsymbol{\vartheta}}^{-1}\) of Eq. (73) into Eq. (97), we obtain

Combining Eqs. (74) and (75), we have Eq. (76) established.
(3) Derivation of Eq. (78):
In order to derive Eq. (78), we first introduce an intermediate variable which can be denoted as
Applying Eq. (76) to Eq. (99) we have

In line with the matrix inversion lemma, we have Eq. (77) established.
Then the matrix \(\tilde{\boldsymbol{w}}\) can be represented as
Applying the matrix inversion lemma to Eq. (101) we have Eq. (78) established.
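The matrix inversion lemma used in this step is the standard Sherman–Morrison–Woodbury identity; in generic notation (the specific matrices entering Eqs. (77) and (78) are not shown above), it reads
\[
(\boldsymbol{A} + \boldsymbol{U}\boldsymbol{C}\boldsymbol{V})^{-1}
= \boldsymbol{A}^{-1} - \boldsymbol{A}^{-1}\boldsymbol{U}\bigl(\boldsymbol{C}^{-1} + \boldsymbol{V}\boldsymbol{A}^{-1}\boldsymbol{U}\bigr)^{-1}\boldsymbol{V}\boldsymbol{A}^{-1},
\]
with the rank-one special case
\[
(\boldsymbol{A} + \boldsymbol{u}\boldsymbol{v}^{\mathrm{T}})^{-1}
= \boldsymbol{A}^{-1} - \frac{\boldsymbol{A}^{-1}\boldsymbol{u}\boldsymbol{v}^{\mathrm{T}}\boldsymbol{A}^{-1}}{1 + \boldsymbol{v}^{\mathrm{T}}\boldsymbol{A}^{-1}\boldsymbol{u}}.
\]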
(4) Derivation of Eq. (79):
The determinant of the covariance matrix \(\boldsymbol{C}_{1}\) satisfies
In line with Eq. (96), we have
In line with Eq. (101), we have
In line with Eq. (100), we have

Combining Eq. (103) with Eq. (104), we have

Therefore,

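The determinant manipulations above rely on two standard identities for partitioned and rank-one-updated matrices, stated here in generic notation:
\[
\det\begin{pmatrix} \boldsymbol{A} & \boldsymbol{B} \\ \boldsymbol{C} & \boldsymbol{D} \end{pmatrix}
= \det(\boldsymbol{A})\det\bigl(\boldsymbol{D} - \boldsymbol{C}\boldsymbol{A}^{-1}\boldsymbol{B}\bigr)
\qquad\text{and}\qquad
\det\bigl(\boldsymbol{A} + \boldsymbol{u}\boldsymbol{v}^{\mathrm{T}}\bigr)
= \bigl(1 + \boldsymbol{v}^{\mathrm{T}}\boldsymbol{A}^{-1}\boldsymbol{u}\bigr)\det(\boldsymbol{A}),
\]
both valid whenever \(\boldsymbol{A}\) is invertible.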
Appendix B
(1) Derivation of Eq. (86):
When the index i∈{1,2,…,M} is added to the preliminary candidate set F to constitute the new candidate set \(\tilde{F}\), the matrix \(\tilde{\boldsymbol{\varXi}}\) can be represented as
Applying Eq. (77) to Eq. (78) we have

In line with Eq. (84), we have
Moreover,
Applying Eqs. (76), (106) and (107) to Eq. (105) we have Eq. (86) established.
(2) Derivation of Eq. (88):
The matrix \(\tilde{\boldsymbol{G}}\) can be represented as
Applying Eq. (73) to Eq. (108) we have
Thus, we have Eq. (88) established.
(3) Derivation of Eq. (90):
The matrix \(\tilde{\boldsymbol{R}}\) can be represented as
Moreover, we have
Applying Eqs. (76), (106), (107) and (110) to Eq. (109), we have Eq. (90) established.
(4) Derivation of Eqs. (91)–(93):
We define


and
Moreover, we have
In addition, we define
The quantity \(\tau_{1}(\tilde{F})\) can be denoted as
The quantity \(\boldsymbol{y}_{\tilde{F}}^{\mathrm{T}}\tilde{\boldsymbol{\vartheta}}^{-1}\boldsymbol{y}_{\tilde{F}}\) can be represented as

Applying Eqs. (86), (88) and (115) to Eq. (114) we have Eq. (91) established.
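Although Eq. (115) is not reproduced here, the quadratic-form update underlying it follows directly from the block inverse of Appendix A; writing \(\tilde{\boldsymbol{\vartheta}}\) with leading block \(\boldsymbol{\vartheta}\), appended column \(\boldsymbol{b}\), corner scalar \(c\), and letting \(y_{i}\) denote the component of \(\boldsymbol{y}_{\tilde{F}}\) associated with the added index (the labels \(\boldsymbol{b}\), \(c\), \(y_{i}\) are illustrative), one obtains
\[
\boldsymbol{y}_{\tilde{F}}^{\mathrm{T}}\tilde{\boldsymbol{\vartheta}}^{-1}\boldsymbol{y}_{\tilde{F}}
= \boldsymbol{y}_{F}^{\mathrm{T}}\boldsymbol{\vartheta}^{-1}\boldsymbol{y}_{F}
+ \frac{\bigl(y_{i} - \boldsymbol{b}^{\mathrm{T}}\boldsymbol{\vartheta}^{-1}\boldsymbol{y}_{F}\bigr)^{2}}{c - \boldsymbol{b}^{\mathrm{T}}\boldsymbol{\vartheta}^{-1}\boldsymbol{b}}.
\]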
We define
Thus, we have
We define
The quantity \(\tau_{3}(\tilde{F})\) can be denoted as
Moreover, we have
Applying Eqs. (76), (106), (110) and (117) to Eq. (116) we have Eq. (93) established.
Cite this article
Ji, Y., Yang, Z. & Li, W. Bayesian Sparse Reconstruction Method of Compressed Sensing in the Presence of Impulsive Noise. Circuits Syst Signal Process 32, 2971–2998 (2013). https://doi.org/10.1007/s00034-013-9605-4