
GPU accelerated greedy algorithms for compressed sensing

  • Full Length Paper
  • Mathematical Programming Computation

Abstract

For appropriate matrix ensembles, greedy algorithms have proven to be an efficient means of solving the combinatorial optimization problem associated with compressed sensing. This paper describes an implementation for graphics processing units (GPUs) of hard thresholding, iterative hard thresholding, normalized iterative hard thresholding, hard thresholding pursuit, and a two-stage thresholding algorithm based on compressive sampling matching pursuit and subspace pursuit. The GPU acceleration of the former bottleneck, namely the matrix–vector multiplications, transfers a significant portion of the computational burden to the identification of the support set. The software solves high-dimensional problems in fractions of a second, which permits large-scale testing at dimensions currently unavailable in the literature. The GPU implementations exhibit up to 70× acceleration over standard Matlab central processing unit implementations using automatic multi-threading.
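As orientation for the algorithm family named above, the following is a minimal CPU sketch of plain iterative hard thresholding, written in NumPy purely for illustration; it is not the authors' CUDA implementation, and the function names (hard_threshold, iht), the conservative fixed step size, and the synthetic problem sizes are assumptions made for this example.

    import numpy as np

    def hard_threshold(x, k):
        """Keep the k largest-magnitude entries of x and zero out the rest."""
        z = np.zeros_like(x)
        keep = np.argpartition(np.abs(x), -k)[-k:]  # indices of the k largest magnitudes
        z[keep] = x[keep]
        return z

    def iht(A, y, k, iters=500):
        """Plain iterative hard thresholding with a conservative fixed step size."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # non-expansive gradient step
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = hard_threshold(x + step * A.T @ (y - A @ x), k)
        return x

    # Small synthetic test: recover a k-sparse vector from m < n Gaussian measurements.
    rng = np.random.default_rng(0)
    m, n, k = 128, 256, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true
    x_hat = iht(A, y, k)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Each iteration is dominated by the two matrix–vector products and the selection of the k largest magnitudes; once the products are moved to the GPU, as described in the paper, the support-set identification accounts for a much larger share of the runtime.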


Notes

  1. In our implementation, the number of elements in the index set is not necessarily exactly k; this depends on the method used for support set detection and is discussed in detail in Sect. 2.1. A rough illustration is given in the first sketch following these notes.

  2. The software does not require a fixed number of nonzeros per column when a known problem is passed directly to the algorithms as a sparse matrix.

  3. While this software requires a CUDA-enabled GPU, it is independent of Matlab and does not require the Parallel Computing Toolbox.

  4. The random sparse matrix generator is designed to be fast for p ≪ m; outside this regime it can fail to construct the sparse matrix appropriately, in which case a warning is issued and the function terminates. A rough illustration of this failure mode is given in the second sketch below.
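Note 1 can be made concrete with a rough, hypothetical sketch of bin-based support detection: a magnitude threshold is read off a histogram, whole bins are kept or discarded, and the resulting index set need not contain exactly k entries. The bin count, the function name approximate_support, and the selection rule below are assumptions for illustration only and do not reproduce the GAGA kernels.

    import numpy as np

    def approximate_support(x, k, bins=64):
        """Choose a magnitude threshold from a histogram so roughly k entries survive."""
        mags = np.abs(x)
        counts, edges = np.histogram(mags, bins=bins)
        kept, threshold = 0, edges[-1]
        # Walk down from the largest-magnitude bin until at least k entries are covered.
        for b in range(bins - 1, -1, -1):
            kept += counts[b]
            threshold = edges[b]
            if kept >= k:
                break
        # Whole bins are kept, so the support size is only approximately k.
        return np.flatnonzero(mags >= threshold)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(2**20)
    support = approximate_support(x, k=1000)
    print("requested k = 1000, selected", support.size, "indices")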
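Similarly, for note 4, the following hypothetical sketch shows why a rejection-sampling generator of sparse matrices with p nonzeros per column is fast only when p ≪ m: duplicate row draws become common as p approaches m, and a retry budget can be exhausted. The ±1/√p values, the retry cap, and the SciPy construction are illustrative assumptions, not the generator shipped with the software.

    import numpy as np
    from scipy.sparse import csc_matrix

    def random_sparse_matrix(m, n, p, max_attempts=20, rng=None):
        """Draw p distinct rows per column by rejection sampling (efficient only when p << m)."""
        rng = rng or np.random.default_rng()
        rows, cols = [], []
        for j in range(n):
            chosen, attempts = set(), 0
            while len(chosen) < p:
                chosen.update(rng.integers(0, m, size=p - len(chosen)).tolist())
                attempts += 1
                if attempts > max_attempts:
                    # Mimics the failure mode of note 4 when p is not much smaller than m.
                    raise RuntimeError("sparse generator gave up: p is too close to m")
            rows.extend(chosen)
            cols.extend([j] * p)
        vals = rng.choice([-1.0, 1.0], size=len(rows)) / np.sqrt(p)
        return csc_matrix((vals, (rows, cols)), shape=(m, n))

    A = random_sparse_matrix(m=2**12, n=2**14, p=7)
    print(A.shape, A.nnz)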

References

  1. Alabi, T., Blanchard, J., Gordon, B., Steinbach, R.: Fast k-selection algorithms for graphics processing units. ACM J. Exp. Algorithmics 17(2), Article 4.2, 4.2:1–4.2:29 (2012)

  2. Allison, D., Noga, M.: Selection by distributive partitioning. Inf. Process. Lett. 11(1), 7–8 (1980)

  3. Beliakov, G.: Parallel calculation of the median and order statistics on GPUs with application to robust regression. Computing Research Repository abs/1104.2 (2011)

  4. Berg, E.v.d., Friedlander, M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)

  5. Blanchard, J.D., Tanner, J.: GAGA: GPU Accelerated Greedy Algorithms (2013). www.gaga4cs.org. Version 1.0.0

  6. Blanchard, J.D., Tanner, J.: Performance comparisons of greedy algorithms in compressed sensing. www.math.grinnell.edu/~blanchaj/PCGACS_preprint.pdf (2013, submitted)

  7. Blumensath, T., Davies, M.E.: Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009)

  8. Blumensath, T., Davies, M.E.: Normalised iterative hard thresholding: guaranteed stability and performance. IEEE Select. Topics Signal Process. 4(2), 298–309 (2010)

  9. Candès, E.J.: Compressive sampling. In: International Congress of Mathematicians, Vol. III, pp. 1433–1452. Eur. Math. Soc., Zürich (2006)

  10. Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59(8), 1207–1223 (2006)

  11. Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inform. Theory 51(12), 4203–4215 (2005)

  12. Cevher, V.: An ALPS view of sparse recovery. In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5808–5811. IEEE (2011)

  13. Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inform. Theory 55(5), 2230–2249 (2009)

  14. Donoho, D.L.: Neighborly polytopes and sparse solution of underdetermined linear equations. Technical Report, Department of Statistics, Stanford University (2004)

  15. Donoho, D.L.: Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)

  16. Donoho, D.L.: For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution. Commun. Pure Appl. Math. 59(7), 907–934 (2006)

  17. Donoho, D.L.: High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension. Discrete Comput. Geom. 35(4), 617–652 (2006)

  18. Donoho, D.L., Tanner, J.: Sparse nonnegative solution of underdetermined linear equations by linear programming. Proc. Natl. Acad. Sci. USA 102(27), 9446–9451 (2005) (electronic)

  19. Donoho, D.L., Tsaig, Y.: Fast solution of l1 minimization problems when the solution may be sparse. IEEE Trans. Inform. Theory 54(11), 4789–4812 (2008)

  20. Donoho, D.L., Tsaig, Y., Drori, I., Starck, J.-L.: Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inform. Theory 58(2), 1094–1121 (2012)

  21. Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE Select. Topics Signal Process. 1(4), 586–597 (2007)

  22. Foucart, S.: Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)

  23. Hoberock, J., Bell, N.: Thrust: a parallel template library, version 1.3.0 (2010). http://www.meganewtons.com/

  24. Kyrillidis, A., Cevher, V.: Recipes on hard thresholding methods. In: 2011 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 353–356. IEEE (2011)

  25. Lee, S., Wright, S.J.: Implementing algorithms for signal and image reconstruction on graphical processing units (2008). http://pages.cs.wisc.edu/swright/GPUreconstruction/gpu_image.pdf

  26. Mallat, S., Zhang, Z.: Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 41(12), 3397–3415 (1993)

  27. Merrill, D., Grimshaw, A.: High performance and scalable radix sorting: a case study of implementing dynamic parallelism for GPU computing. Parallel Process. Lett. 21(02), 245–272 (2011)

  28. Monroe, L., Wendelberger, J., Michalak, S.: Randomized selection on the GPU. In: Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics, HPG ’11, pp. 89–98. ACM, New York (2011)

  29. Natarajan, B.K.: Sparse approximate solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)

  30. Needell, D., Tropp, J.: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009)

  31. NVIDIA: CUDA Toolkit 4.0 (2011). http://developer.nvidia.com/cuda-toolkit-40

  32. Tropp, J.A.: Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 50(10), 2231–2242 (2004)

  33. Tropp, J.A., Gilbert, A.C.: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inform. Theory 53(12), 4655–4666 (2007)

  34. Wright, S.J., Nowak, R.D., Figueiredo, M.A.T.: Sparse reconstruction by separable approximation. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (2008)

  35. Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)


Acknowledgments

We thank Stephen Wright and Shangkyun Lee for allowing inclusion of their dct matrix–vector product code [25]. We also thank Erik Opavsky and Emircan Uysaler, Grinnell College students, for their dense matrix–vector product, which is the foundation of our gen matrix–vector product.

Author information

Corresponding author

Correspondence to Jared Tanner.

Additional information

JDB was a National Science Foundation International Research Fellow at the University of Edinburgh under award NSF OISE 0854991 and is supported by NSF DMS 1112612. JT was supported by the Leverhulme Trust and an Nvidia Professor Partnership.


Cite this article

Blanchard, J.D., Tanner, J. GPU accelerated greedy algorithms for compressed sensing. Math. Prog. Comp. 5, 267–304 (2013). https://doi.org/10.1007/s12532-013-0056-5
