Abstract
We propose two approaches to solving large-scale compressed sensing problems. The first uses the parametric simplex method to recover very sparse signals with a small number of simplex pivots; the second reformulates the problem using Kronecker products, yielding a sparser problem formulation and hence faster computation. We focus on the computational aspects of these methods. For the first approach, if the true signal is very sparse and the solution is initialized at the zero vector, a customized parametric simplex method typically converges in a small number of iterations. Our numerical studies show that this approach is 10 times faster than state-of-the-art methods for recovering very sparse signals. The second approach applies when the sensing matrix is the Kronecker product of two smaller matrices. We show that the best-known sufficient condition for the Kronecker compressed sensing (KCS) strategy to achieve perfect recovery is more restrictive than the corresponding condition for the first approach. However, KCS can be formulated as a linear program with a very sparse constraint matrix, whereas the first approach involves a completely dense constraint matrix. Hence, algorithms that benefit from sparse problem representations, such as interior point methods (IPMs), are expected to have a computational advantage on the KCS problem. We numerically demonstrate that KCS combined with IPMs is up to 10 times faster than vanilla IPMs and state-of-the-art methods such as \(\ell_1\_\ell_s\) and Mirror Prox, regardless of the sparsity level or problem size.
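To make the two formulations concrete, the sketch below (not part of the original article) illustrates the standard linear programming recast of basis pursuit, \(\min \|x\|_1\) subject to \(Ax = b\), and the kind of block-sparse constraint matrix that a Kronecker-structured sensing problem admits. It is a minimal sketch under stated assumptions: it uses NumPy/SciPy and the generic HiGHS solver rather than the paper's customized parametric simplex method or an interior point code, the problem sizes and Gaussian matrices are illustrative, and the splitting \(A_1 X B_1^{\top}=Y \Leftrightarrow A_1 X = Z,\ Z B_1^{\top}=Y\) is one common sparsification in the spirit of KCS, not necessarily the paper's exact formulation.

```python
# A minimal, self-contained sketch (not the authors' code): basis pursuit
#   minimize ||x||_1  subject to  A x = b
# recast as a linear program via x = u - v, u, v >= 0, and solved with SciPy's
# generic HiGHS solver in place of a customized parametric simplex method.
# All sizes and the Gaussian sensing matrix are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 60, 200, 5                           # measurements, signal length, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # dense sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# ||x||_1 = 1'u + 1'v and A(u - v) = b; linprog's default bounds give u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])                      # completely dense constraint matrix
res = linprog(c, A_eq=A_eq, b_eq=b, method="highs")
x_hat = res.x[:n] - res.x[n:]
print("basis pursuit recovery error:", np.linalg.norm(x_hat - x_true))

# Sparsification idea behind Kronecker-structured sensing (illustrative only):
# for measurements Y = A1 X B1', the one-shot operator kron(B1, A1) acting on
# vec(X) is dense, but introducing Z = A1 X and writing A1 X = Z, Z B1' = Y
# yields constraint blocks kron(I, A1) and kron(B1, I) that are block-sparse.
m1, n1, m2, n2 = 20, 60, 20, 60
A1 = sp.csr_matrix(rng.standard_normal((m1, n1)))
B1 = sp.csr_matrix(rng.standard_normal((m2, n2)))

dense_op = sp.kron(B1, A1)                     # (m1*m2) x (n1*n2), fully dense
split_op = sp.vstack([
    sp.hstack([sp.kron(sp.identity(n2), A1), -sp.identity(m1 * n2)]),
    sp.hstack([sp.csr_matrix((m1 * m2, n1 * n2)), sp.kron(B1, sp.identity(m1))]),
])
print("nonzeros, dense Kronecker operator:", dense_op.nnz)
print("nonzeros, split (sparsified) constraint matrix:", split_op.nnz)
```

On runs with these assumed sizes the split constraint matrix has far fewer nonzeros than the dense Kronecker operator, which is the structural property that sparse-matrix-friendly solvers such as interior point methods can exploit.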



Acknowledgments
The authors would like to offer their sincerest thanks to the referees and the editors, all of whom read earlier versions of the paper very carefully and made many excellent suggestions on how to improve it.
Additional information
The first author’s research is supported by ONR Award N00014-13-1-0093, the third author’s by NSF Grant III–1116730, and the fourth author’s by NSF Grant DMS-1005539.
Cite this article
Vanderbei, R., Lin, K., Liu, H. et al. Revisiting compressed sensing: exploiting the efficiency of simplex and sparsification methods. Math. Prog. Comp. 8, 253–269 (2016). https://doi.org/10.1007/s12532-016-0105-y