
Bayesian Sparse Reconstruction Method of Compressed Sensing in the Presence of Impulsive Noise


Abstract

Most existing recovery algorithms in the compressed sensing framework are not robust to impulsive noise, yet impulsive noise is common in practical communication and signal processing systems. In this paper, we propose a method, termed 'Bayesian sparse reconstruction', to recover a sparse signal from a measurement vector corrupted by impulsive noise. The method consists of five parts: a preliminary detection of the location set of impulses, an impulsive-noise fast relevance vector machine algorithm, a pruning step, a Bayesian impulse detection algorithm, and a maximum a posteriori estimate of the sparse vector. The Bayesian sparse reconstruction method achieves effective signal recovery in the presence of impulsive noise through the interplay of the impulsive-noise fast relevance vector machine algorithm, the pruning step, and the Bayesian impulse detection algorithm. Experimental results show that the method is robust to impulsive noise and remains effective in an additive white Gaussian noise environment.
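As a rough, self-contained illustration of the problem setting (not the authors' implementation; all sizes and names below are hypothetical), the following Python sketch builds a sparse signal, a Gaussian measurement matrix, and a measurement vector corrupted by both background Gaussian noise and a few large impulses:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 64, 256, 8          # measurements, ambient dimension, sparsity (illustrative)

# Sparse signal and Gaussian measurement matrix
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Measurement noise: dense Gaussian background plus a few large impulses
noise = 0.01 * rng.standard_normal(M)
impulse_locs = rng.choice(M, 4, replace=False)   # the location set the method must detect
noise[impulse_locs] += 10.0 * rng.standard_normal(4)

y = Phi @ x + noise
```

Least-squares-style recovery, which implicitly assumes Gaussian noise, tends to be dominated by the few large entries of such a noise vector; this is the failure mode the proposed method is designed to avoid.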



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 60971129, 61201326 and 61271335), the National Basic Research Program of China (973 Program) (Grant No. 2011CB302303), the Scientific Research Foundation of NUPT (Grant No. NY211039), the Natural Science Fund for Higher Education of Jiangsu Province (Grant No. 12KJB510021) and the Scientific Innovation Research Programs of College Graduates in Jiangsu Province (Grant Nos. CXLX11_0408 and CXZZ12_0473).

Author information

Correspondence to Yunyun Ji.

Appendices

Appendix A

(1) Derivation of Eq. (73):

When the index i∈{1,2,…,M} is added to the preliminary candidate set F to constitute the new candidate set \(\tilde{F}\), the matrix \(\tilde{\boldsymbol{\vartheta}}\) can be represented as

$$ \tilde{\boldsymbol{\vartheta}} = \bar{\boldsymbol{\varPhi}} _{\tilde{F}}\bar{\boldsymbol{A}}^{ - 1}\bar{\boldsymbol{\varPhi}} _{\tilde{F}}^{\mathrm{T}} + \sigma _{o}^{2} \boldsymbol{I}. $$
(95)

The relationship between the matrices \(\tilde{\boldsymbol{\vartheta}}\) and \(\boldsymbol{\vartheta}\) is

$$ \tilde{\boldsymbol{\vartheta}} = \left[ \begin{array}{cc} \boldsymbol{\vartheta} & \bar{\boldsymbol{\varPhi}}_{F}\bar{\boldsymbol{A}}^{-1}\boldsymbol{\varphi}_{i}^{\mathrm{T}} \\ \boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{-1}\bar{\boldsymbol{\varPhi}}_{F}^{\mathrm{T}} & \boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{-1}\boldsymbol{\varphi}_{i}^{\mathrm{T}} + \sigma_{o}^{2} \end{array} \right]. $$
(96)

Applying the block matrix inversion identity to \(\tilde{\boldsymbol{\vartheta}}\) in Eq. (96), the inverse of \(\tilde{\boldsymbol{\vartheta}}\) can also be written in block form as

$$\tilde{\boldsymbol{\vartheta}}^{-1} = \left[ \begin{array}{cc} \boldsymbol{X}_{1} & \boldsymbol{X}_{2} \\ \boldsymbol{X}_{3} & \boldsymbol{X}_{4} \end{array} \right]$$

where

$$\boldsymbol{X}_{4} = \bigl(\boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{-1}\boldsymbol{\varphi}_{i}^{\mathrm{T}} + \sigma_{o}^{2} - \boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{-1}\bar{\boldsymbol{\varPhi}}_{F}^{\mathrm{T}}\boldsymbol{\vartheta}^{-1}\bar{\boldsymbol{\varPhi}}_{F}\bar{\boldsymbol{A}}^{-1}\boldsymbol{\varphi}_{i}^{\mathrm{T}}\bigr)^{-1},$$

$$\boldsymbol{X}_{1} = \boldsymbol{\vartheta}^{-1} + \boldsymbol{X}_{4}\boldsymbol{\vartheta}^{-1}\bar{\boldsymbol{\varPhi}}_{F}\bar{\boldsymbol{A}}^{-1}\boldsymbol{\varphi}_{i}^{\mathrm{T}}\boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{-1}\bar{\boldsymbol{\varPhi}}_{F}^{\mathrm{T}}\boldsymbol{\vartheta}^{-1},\qquad \boldsymbol{X}_{2} = -\boldsymbol{X}_{4}\boldsymbol{\vartheta}^{-1}\bar{\boldsymbol{\varPhi}}_{F}\bar{\boldsymbol{A}}^{-1}\boldsymbol{\varphi}_{i}^{\mathrm{T}},$$

and

$$\boldsymbol{X}_{3} = -\boldsymbol{X}_{4}\boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{-1}\bar{\boldsymbol{\varPhi}}_{F}^{\mathrm{T}}\boldsymbol{\vartheta}^{-1}. $$

Combining Eqs. (71) and (72), we have Eq. (73) established.
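The block-inversion identity behind Eq. (73) is easy to check numerically. The sketch below is our own sanity check, with illustrative dimensions; the identification of \(b_{i}\) with the Schur complement and of \(\boldsymbol{d}_{i}\) with \(\boldsymbol{\vartheta}^{-1}\bar{\boldsymbol{\varPhi}}_{F}\bar{\boldsymbol{A}}^{-1}\boldsymbol{\varphi}_{i}^{\mathrm{T}}\) follows the block form used in Appendix B:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 4                 # |F| and number of active columns (illustrative)
sigma_o2 = 0.5
Phi_F = rng.standard_normal((n, p))
phi_i = rng.standard_normal((1, p))          # row appended when index i joins F
A_inv = np.diag(1.0 / rng.uniform(0.5, 2.0, p))

theta = Phi_F @ A_inv @ Phi_F.T + sigma_o2 * np.eye(n)
theta_inv = np.linalg.inv(theta)

# Schur-complement scalar b_i and vector d_i from the block-inversion identity
d_i = theta_inv @ Phi_F @ A_inv @ phi_i.T
b_i = 1.0 / ((phi_i @ A_inv @ phi_i.T).item() + sigma_o2
             - (phi_i @ A_inv @ Phi_F.T @ theta_inv @ Phi_F @ A_inv @ phi_i.T).item())

block_inv = np.block([[theta_inv + b_i * (d_i @ d_i.T), -b_i * d_i],
                      [-b_i * d_i.T,                     np.array([[b_i]])]])
theta_tilde = np.block([[theta,                   Phi_F @ A_inv @ phi_i.T],
                        [phi_i @ A_inv @ Phi_F.T, (phi_i @ A_inv @ phi_i.T) + sigma_o2]])
assert np.allclose(block_inv, np.linalg.inv(theta_tilde))
```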

(2) Derivation of Eq. (76):

The update matrix \(\tilde{\boldsymbol{\varTheta}}\) can be written as

$$ \tilde{\boldsymbol{\varTheta}} \stackrel{\Delta}{=} \boldsymbol{I} - \bar{\boldsymbol{\varPhi}} _{\tilde {F}}^{\mathrm{T}}\tilde{\boldsymbol{ \vartheta}} ^{ - 1}\bar{\boldsymbol{\varPhi}} _{\tilde{F}}\bar{ \boldsymbol{A}}^{ - 1}. $$
(97)

We apply the matrix \(\tilde{\boldsymbol{\vartheta}}^{-1}\) in Eq. (73) to Eq. (97) to obtain Eq. (98); combining Eqs. (74) and (75), we have Eq. (76) established.
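Although Eqs. (74)-(76) appear in the main text and are not reproduced here, the definition in Eq. (97) itself admits a quick check: by the Woodbury identity, \(\bar{\boldsymbol{A}}^{-1}\tilde{\boldsymbol{\varTheta}}\) equals the posterior covariance \((\bar{\boldsymbol{A}} + \sigma_{o}^{-2}\bar{\boldsymbol{\varPhi}}_{\tilde{F}}^{\mathrm{T}}\bar{\boldsymbol{\varPhi}}_{\tilde{F}})^{-1}\) familiar from sparse Bayesian learning. This is our own observation, verified with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 4                  # illustrative sizes
sigma_o2 = 0.5
Phi = rng.standard_normal((n, p))            # stands in for Phi_{F~}
A = np.diag(rng.uniform(0.5, 2.0, p))
A_inv = np.linalg.inv(A)

theta_tilde = Phi @ A_inv @ Phi.T + sigma_o2 * np.eye(n)
Theta_tilde = np.eye(p) - Phi.T @ np.linalg.inv(theta_tilde) @ Phi @ A_inv  # Eq. (97)

# Woodbury: A^{-1} Theta~ is the sparse-Bayesian posterior covariance
Sigma = np.linalg.inv(A + Phi.T @ Phi / sigma_o2)
assert np.allclose(A_inv @ Theta_tilde, Sigma)
```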

(3) Derivation of Eq. (78):

To derive Eq. (78), we first introduce an intermediate matrix \(\boldsymbol{W}_{i}\), defined as

$$ \boldsymbol{W}_{i} = \boldsymbol{I} + \beta_{2} \bar{\boldsymbol{\varPhi}} _{\bar{F}}^{\mathrm{T}} \bar{\boldsymbol{ \varPhi}} _{\bar{F}}\bar{\boldsymbol{A}}^{ - 1}\tilde{\boldsymbol{ \varTheta}}. $$
(99)

Applying Eq. (76) to Eq. (99) yields Eq. (100).

In line with the matrix inversion lemma, we have Eq. (77) established.

Then the matrix \(\tilde{\boldsymbol{w}}\) can be represented as

$$ \tilde{\boldsymbol{w}} = \boldsymbol{W}_{i} - \beta_{2}\boldsymbol{\varphi}_{i}^{\mathrm{T}} \boldsymbol{ \varphi}_{i}\bar{\boldsymbol{A}}^{ - 1}\tilde{\boldsymbol{ \varTheta}}. $$
(101)

Applying the matrix inversion lemma to Eq. (101), we have Eq. (78) established.
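The step from Eq. (101) to Eq. (78) is a rank-one application of the matrix inversion lemma (Sherman-Morrison), since \(\beta_{2}\boldsymbol{\varphi}_{i}^{\mathrm{T}}\boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{-1}\tilde{\boldsymbol{\varTheta}}\) is an outer product. A generic numeric check of the lemma in the form used here, with stand-in matrices rather than the paper's quantities:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 5
W = rng.standard_normal((p, p)) + p * np.eye(p)  # well-conditioned stand-in for W_i
u = rng.standard_normal((p, 1))                  # plays the role of beta_2 * phi_i^T
v = rng.standard_normal((p, 1))                  # plays the role of (phi_i A^{-1} Theta~)^T

W_inv = np.linalg.inv(W)
denom = 1.0 - (v.T @ W_inv @ u).item()
sm_inv = W_inv + (W_inv @ u @ v.T @ W_inv) / denom   # Sherman-Morrison for (W - u v^T)^{-1}
assert np.allclose(sm_inv, np.linalg.inv(W - u @ v.T))
```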

(4) Derivation of Eq. (79):

The determinant of the covariance matrix \(\boldsymbol{C}_{1}\) satisfies

$$ \det(\boldsymbol{C}_{1}) = \sigma_{g}^{2L_{2}} \det(\boldsymbol{\vartheta})\det(\boldsymbol{w}). $$
(102)

In line with Eq. (96), we have

$$\det(\tilde{\boldsymbol{\vartheta}} ) = \det(\boldsymbol{\vartheta}) \bigl( \boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{ - 1}\boldsymbol{ \varphi}_{i}^{\mathrm{T}} + \sigma _{o}^{2} - \boldsymbol{\varphi}_{i}\bar{\boldsymbol{A}}^{ - 1}\bar{ \boldsymbol{\varPhi}} _{F}^{\mathrm{T}} \boldsymbol{ \vartheta}^{ - 1}\bar{\boldsymbol{\varPhi}} _{F}\bar{ \boldsymbol{A}}^{ - 1}\boldsymbol{\varphi}_{i}^{\mathrm{T}} \bigr) = b_{i}^{ - 1}\det(\boldsymbol{\vartheta}). $$

In line with Eq. (101), we have

$$ \det(\tilde{\boldsymbol{w}}) = \det(\boldsymbol{W}_{i}) \bigl(1 - \beta_{2}\boldsymbol{\varphi}_{i}\bar{ \boldsymbol{A}}^{ - 1}\tilde{\boldsymbol{\varTheta}} \boldsymbol{W}_{i}^{ - 1} \boldsymbol{\varphi}_{i}^{\mathrm{T}}\bigr). $$
(103)

In line with Eq. (100), we obtain Eq. (104).

Combining Eq. (103) with Eq. (104) and substituting the result into Eq. (102), we have Eq. (79) established.
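The determinant factorization used above is the standard Schur-complement identity \(\det(\tilde{\boldsymbol{\vartheta}}) = b_{i}^{-1}\det(\boldsymbol{\vartheta})\). A self-contained numeric check (our own verification, with illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 5, 4
sigma_o2 = 0.5
Phi_F = rng.standard_normal((n, p))
phi_i = rng.standard_normal((1, p))
A_inv = np.diag(1.0 / rng.uniform(0.5, 2.0, p))

theta = Phi_F @ A_inv @ Phi_F.T + sigma_o2 * np.eye(n)
cross = Phi_F @ A_inv @ phi_i.T
corner = (phi_i @ A_inv @ phi_i.T).item() + sigma_o2
theta_tilde = np.block([[theta, cross], [cross.T, np.array([[corner]])]])

# det(theta~) = det(theta) * (Schur complement) = b_i^{-1} det(theta)
b_i_inv = corner - (cross.T @ np.linalg.inv(theta) @ cross).item()
assert np.isclose(np.linalg.det(theta_tilde), b_i_inv * np.linalg.det(theta))
```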

Appendix B

(1) Derivation of Eq. (86):

When the index i∈{1,2,…,M} is added to the preliminary candidate set F to constitute the new candidate set \(\tilde{F}\), the matrix \(\tilde{\boldsymbol{\varXi}}\) can be represented as

$$ \tilde{\boldsymbol{\varXi}} = \bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}^{\mathrm{T}}\tilde{\boldsymbol{Z}}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}} = \beta _{2}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}^{\mathrm{T}}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}} - \beta_{2}^{2}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}^{\mathrm{T}} \bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}\bar{\boldsymbol{A}}^{ - 1}\tilde {\boldsymbol{\varTheta}} \tilde{\boldsymbol{w}}^{ - 1}\bar{\boldsymbol{ \varPhi}} _{\bar{\tilde{F}}}^{\mathrm{T}}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}. $$
(105)

Applying Eq. (77) to Eq. (78), and in line with Eq. (84), we have

$$ \tilde{\boldsymbol{w}}^{ - 1} = \boldsymbol{w}^{ - 1} + \boldsymbol{f}_{i}. $$
(106)

Moreover,

$$ \boldsymbol{\varPhi}_{\bar{\tilde{F}}}^{\mathrm{T}}\boldsymbol{ \varPhi}_{\bar{\tilde{F}}} = \boldsymbol{\varPhi}_{\bar {F}}^{\mathrm{T}} \boldsymbol{\varPhi}_{\bar{F}} - \boldsymbol{\varphi} _{i}^{\mathrm{T}} \boldsymbol{\varphi}_{i}. $$
(107)

Applying Eqs. (76), (106) and (107) to Eq. (105), we have Eq. (86) established.
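Equation (107) states that when index \(i\) leaves the complement set \(\bar{F}\), its row's rank-one contribution drops out of the Gram matrix. A minimal check with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(5)
m, p = 7, 4
Phi_Fbar = rng.standard_normal((m, p))   # rows indexed by the complement set
i = 3                                    # index i leaves the complement when it joins F
phi_i = Phi_Fbar[i:i + 1, :]
Phi_Ftbar = np.delete(Phi_Fbar, i, axis=0)

# Deleting one row subtracts its rank-one Gram contribution (Eq. (107))
assert np.allclose(Phi_Ftbar.T @ Phi_Ftbar,
                   Phi_Fbar.T @ Phi_Fbar - phi_i.T @ phi_i)
```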

(2) Derivation of Eq. (88):

The matrix \(\tilde{\boldsymbol{G}}\) can be represented as

$$ \tilde{\boldsymbol{G}} = \boldsymbol{y}_{\tilde{F}}^{\mathrm{T}} \tilde{\boldsymbol{\vartheta}} ^{ - 1}\bar{\boldsymbol{\varPhi}} _{\tilde{F}}. $$
(108)

Applying Eq. (73) to Eq. (108), we have

$$\tilde{\boldsymbol{G}} = \left[ \begin{array}{c} \boldsymbol{y}_{F} \\ y_{i} \end{array} \right]^{\mathrm{T}} \left[ \begin{array}{cc} \boldsymbol{\vartheta}^{-1} + b_{i}\boldsymbol{d}_{i}\boldsymbol{d}_{i}^{\mathrm{T}} & -b_{i}\boldsymbol{d}_{i} \\ -b_{i}\boldsymbol{d}_{i}^{\mathrm{T}} & b_{i} \end{array} \right] \left[ \begin{array}{c} \bar{\boldsymbol{\varPhi}}_{F} \\ \boldsymbol{\varphi}_{i} \end{array} \right] = \boldsymbol{G} + \boldsymbol{g}_{i}. $$

Thus, we have Eq. (88) established.
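Multiplying out the block product shows that the correction is rank one: under the identifications above, \(\boldsymbol{g}_{i} = b_{i}(\boldsymbol{d}_{i}^{\mathrm{T}}\boldsymbol{y}_{F} - y_{i})(\boldsymbol{d}_{i}^{\mathrm{T}}\bar{\boldsymbol{\varPhi}}_{F} - \boldsymbol{\varphi}_{i})\). This expansion is ours (the paper defines \(\boldsymbol{g}_{i}\) in the main text, which is not reproduced here); a numeric check:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 5, 4
sigma_o2 = 0.5
Phi_F = rng.standard_normal((n, p))
phi_i = rng.standard_normal((1, p))
A_inv = np.diag(1.0 / rng.uniform(0.5, 2.0, p))
y_F = rng.standard_normal((n, 1))
y_i = rng.standard_normal()

theta = Phi_F @ A_inv @ Phi_F.T + sigma_o2 * np.eye(n)
theta_inv = np.linalg.inv(theta)
d_i = theta_inv @ Phi_F @ A_inv @ phi_i.T
b_i = 1.0 / ((phi_i @ A_inv @ phi_i.T).item() + sigma_o2
             - (phi_i @ A_inv @ Phi_F.T @ theta_inv @ Phi_F @ A_inv @ phi_i.T).item())

theta_tilde = np.block([[theta,                   Phi_F @ A_inv @ phi_i.T],
                        [phi_i @ A_inv @ Phi_F.T, (phi_i @ A_inv @ phi_i.T) + sigma_o2]])
y_Ft = np.vstack([y_F, [[y_i]]])
Phi_Ft = np.vstack([Phi_F, phi_i])

G = y_F.T @ theta_inv @ Phi_F
g_i = b_i * ((d_i.T @ y_F).item() - y_i) * (d_i.T @ Phi_F - phi_i)  # rank-one correction
assert np.allclose(y_Ft.T @ np.linalg.inv(theta_tilde) @ Phi_Ft, G + g_i)
```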

(3) Derivation of Eq. (90):

The matrix \(\tilde{\boldsymbol{R}}\) can be represented as

$$ \tilde{\boldsymbol{R}} = \boldsymbol{y}_{\bar{\tilde{F}}}^{\mathrm{T}} \tilde{\boldsymbol{Z}}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde {F}}} = \beta _{2}\boldsymbol{y}_{\bar{\tilde{F}}}^{\mathrm{T}}\bar{\boldsymbol{ \varPhi}} _{\bar{\tilde{F}}} - \beta _{2}^{2} \boldsymbol{y}_{\bar{\tilde{F}}}^{\mathrm{T}}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}\bar{\boldsymbol{A}}^{ - 1}\tilde{\boldsymbol{\varTheta}} \tilde{\boldsymbol{w}}^{ - 1}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}^{\mathrm{T}}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}. $$
(109)

Moreover, we have

$$ \boldsymbol{y}_{\bar{\tilde{F}}}^{\mathrm{T}}\boldsymbol{ \varPhi}_{\bar{\tilde{F}}} = \boldsymbol{y}_{\bar{F}}^{\mathrm{T}}\boldsymbol{ \varPhi} _{\bar{F}} - y_{i}\boldsymbol{\varphi} _{i}. $$
(110)

Applying Eqs. (76), (106), (107) and (110) to Eq. (109), we have Eq. (90) established.

(4) Derivation of Eqs. (91)–(93):

We define

$$ \tau_{1}(F) = -\bigl(\boldsymbol{y}_{F}^{\mathrm{T}}\boldsymbol{\vartheta}^{-1}\boldsymbol{y}_{F} + \boldsymbol{G}\bar{\boldsymbol{A}}^{-1}\boldsymbol{\varXi}\bar{\boldsymbol{A}}^{-1}\boldsymbol{G}^{\mathrm{T}}\bigr), $$
(111)

$$ \tau_{2}(F) = 2\boldsymbol{R}\bar{\boldsymbol{A}}^{-1}\boldsymbol{G}^{\mathrm{T}} $$
(112)

and

$$ \tau_{3}(F) = - \boldsymbol{y}_{\bar{F}}^{\mathrm{T}} \boldsymbol{Zy}_{\bar{F}} = - \beta_{2}\boldsymbol{y}_{\bar {F}}^{\mathrm{T}} \boldsymbol{y}_{\bar{F}} + \beta _{2}^{2} \boldsymbol{y}_{\bar{F}}^{\mathrm{T}}\bar{\boldsymbol{\varPhi}} _{\bar{F}}\bar{\boldsymbol{A}}^{ - 1} \boldsymbol{\varTheta} \boldsymbol{w}^{ - 1}\boldsymbol{\varPhi} _{\bar{F}}^{\mathrm{T}}\boldsymbol{y}_{\bar{F}}. $$
(113)

Moreover, we have

$$\rho_{2}(F) = \tau_{1}(F) + \tau_{2}(F) + \tau_{3}(F). $$

In addition, we define

$$ pv_{1} = \tau_{1}(\tilde{F}) - \tau_{1}(F). $$
(114)

The quantity \(\tau_{1}(\tilde{F})\) can be denoted as

$$\tau_{1}(\tilde{F}) = - \bigl(\boldsymbol{y}_{\tilde{F}}^{\mathrm{T}} \tilde{\boldsymbol{\vartheta}} ^{ - 1}\boldsymbol{y}_{\tilde{F}} + \tilde{\boldsymbol{G}}\bar{\boldsymbol{A}}^{ - 1}\tilde{\boldsymbol{ \varXi}} \bar{\boldsymbol{A}}^{ - 1}\tilde{\boldsymbol{G}}^{\mathrm{T}} \bigr). $$

The quantity \(\boldsymbol{y}_{\tilde{F}}^{\mathrm{T}}\tilde{\boldsymbol{\vartheta}}^{-1}\boldsymbol{y}_{\tilde{F}}\) can be represented, using the block form of \(\tilde{\boldsymbol{\vartheta}}^{-1}\) in Eq. (73), as

$$ \boldsymbol{y}_{\tilde{F}}^{\mathrm{T}}\tilde{\boldsymbol{\vartheta}}^{-1}\boldsymbol{y}_{\tilde{F}} = \boldsymbol{y}_{F}^{\mathrm{T}}\boldsymbol{\vartheta}^{-1}\boldsymbol{y}_{F} + b_{i}\bigl(\boldsymbol{d}_{i}^{\mathrm{T}}\boldsymbol{y}_{F} - y_{i}\bigr)^{2}. $$
(115)

Applying Eqs. (86), (88) and (115) to Eq. (114), we have Eq. (91) established.
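The quadratic-form update in Eq. (115) can be verified directly against a full inversion; the sketch below (illustrative sizes, our own check) confirms it:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 5, 4
sigma_o2 = 0.5
Phi_F = rng.standard_normal((n, p))
phi_i = rng.standard_normal((1, p))
A_inv = np.diag(1.0 / rng.uniform(0.5, 2.0, p))
y_F = rng.standard_normal((n, 1))
y_i = rng.standard_normal()

theta = Phi_F @ A_inv @ Phi_F.T + sigma_o2 * np.eye(n)
theta_inv = np.linalg.inv(theta)
d_i = theta_inv @ Phi_F @ A_inv @ phi_i.T
b_i = 1.0 / ((phi_i @ A_inv @ phi_i.T).item() + sigma_o2
             - (phi_i @ A_inv @ Phi_F.T @ theta_inv @ Phi_F @ A_inv @ phi_i.T).item())

theta_tilde = np.block([[theta,                   Phi_F @ A_inv @ phi_i.T],
                        [phi_i @ A_inv @ Phi_F.T, (phi_i @ A_inv @ phi_i.T) + sigma_o2]])
y_Ft = np.vstack([y_F, [[y_i]]])

lhs = (y_Ft.T @ np.linalg.inv(theta_tilde) @ y_Ft).item()
rhs = (y_F.T @ theta_inv @ y_F).item() + b_i * ((d_i.T @ y_F).item() - y_i) ** 2
assert np.isclose(lhs, rhs)   # Eq. (115)
```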

We define

$$pv_{2} = \tau_{2}(\tilde{F}) - \tau_{2}(F). $$

Thus, we have

$$pv_{2} = 2\tilde{\boldsymbol{R}}\bar{\boldsymbol{A}}^{ - 1} \tilde{\boldsymbol{G}}^{\mathrm{T}} - 2\boldsymbol{R}\bar{\boldsymbol{A}}^{ - 1} \boldsymbol{G}^{\mathrm{T}} = 2\bigl(\boldsymbol{r}_{i}\bar{ \boldsymbol{A}}^{ - 1}\tilde{\boldsymbol{G}}^{\mathrm{T}} + \boldsymbol{R} \bar{\boldsymbol{A}}^{ - 1}\boldsymbol{g}_{i}^{\mathrm{T}} \bigr). $$

We define

$$ pv_{3} = \tau_{3}(\tilde{F}) - \tau_{3}(F). $$
(116)

The quantity \(\tau_{3}(\tilde{F})\) can be denoted as

$$\tau_{3}(\tilde{F}) = - \beta_{2}\boldsymbol{y}_{\bar{\tilde{F}}}^{\mathrm{T}} \boldsymbol{y}_{\bar{\tilde{F}}} + \beta _{2}^{2} \boldsymbol{y}_{\bar{\tilde{F}}}^{\mathrm{T}} \bar{\boldsymbol{\varPhi}}_{\bar{\tilde{F}}}\,\bar{\boldsymbol{A}}^{ -1}\tilde{\boldsymbol{\varTheta}} \tilde{\boldsymbol{w}}^{- 1}\bar{\boldsymbol{\varPhi}} _{\bar{\tilde{F}}}^{\mathrm{T}}\boldsymbol{y}_{\bar{\tilde{F}}}. $$

Moreover, we have

$$ \boldsymbol{y}_{\bar{\tilde{F}}}^{\mathrm{T}}\boldsymbol{y}_{\bar{\tilde{F}}} = \boldsymbol{y}_{\bar{F}}^{\mathrm{T}}\boldsymbol{y}_{\bar{F}} - y_{i}^{2}. $$
(117)

Applying Eqs. (76), (106), (110) and (117) to Eq. (116), we have Eq. (93) established.

About this article

Cite this article

Ji, Y., Yang, Z. & Li, W. Bayesian Sparse Reconstruction Method of Compressed Sensing in the Presence of Impulsive Noise. Circuits Syst Signal Process 32, 2971–2998 (2013). https://doi.org/10.1007/s00034-013-9605-4