Compressive Sensing

  • Reference work entry

Abstract

Compressive sensing is a recent sampling theory which predicts that sparse signals and images can be reconstructed from what was previously believed to be incomplete information. A main feature is that efficient algorithms, such as ℓ1-minimization, can be used for the recovery. The theory has many potential applications in signal processing and imaging. This chapter gives an introduction to and an overview of both theoretical and numerical aspects of compressive sensing.
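
As a brief illustration of the recovery step, the sketch below recasts ℓ1-minimization (basis pursuit), i.e., minimize ||x||_1 subject to Ax = y, as a linear program and recovers a 5-sparse vector from 40 random Gaussian measurements in dimension 100. The use of SciPy's linprog and all parameter choices are illustrative assumptions and are not taken from the chapter itself.

# A minimal sketch (illustrative, not from the chapter): sparse recovery by
# l1-minimization (basis pursuit), recast as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5                           # measurements, ambient dimension, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                                 # incomplete linear measurements, m << n

# Basis pursuit: minimize ||x||_1 subject to A x = y.
# LP reformulation in z = [x, t]: minimize sum(t) with x - t <= 0, -x - t <= 0, A x = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
b_ub = np.zeros(2 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * n), method="highs")

x_hat = res.x[:n]
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))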

Author information

Corresponding author

Correspondence to Massimo Fornasier.

Copyright information

© 2015 Springer Science+Business Media New York

About this entry

Cite this entry

Fornasier, M., Rauhut, H. (2015). Compressive Sensing. In: Scherzer, O. (eds) Handbook of Mathematical Methods in Imaging. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-0790-8_6
