
Fixed point and Bregman iterative methods for matrix rank minimization

  • Full Length Paper
  • Series A
  • Published in Mathematical Programming

Abstract

The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. Its tightest convex relaxation is the linearly constrained nuclear norm minimization problem. Although the latter can be cast as a semidefinite program, such an approach is computationally expensive when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By combining a homotopy approach with an approximate singular value decomposition procedure, we obtain a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10⁻⁵ in about 3 minutes by sampling only 20% of the elements. We know of no other method that achieves recoverability this good. Numerical experiments on online recommendation, DNA microarray and image inpainting problems demonstrate the effectiveness of our algorithms.
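The fixed point continuation idea sketched in the abstract — gradient steps on the data-fitting term alternated with singular value shrinkage, while a homotopy drives the regularization parameter μ toward zero — can be illustrated for the matrix completion special case as follows. This is a minimal NumPy reconstruction, not the authors' FPCA code: the function names (`svt`, `fpc`), the geometric μ schedule, and the inner iteration count are assumptions made for illustration, and the approximate SVD that makes FPCA fast on large problems is replaced here by a full SVD.

```python
import numpy as np

def svt(Y, tau):
    """Singular value shrinkage: the proximal operator of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fpc(M_obs, mask, mu_final=1e-4, decay=0.25, inner=100):
    """Fixed point continuation (illustrative sketch) for
        min_X  mu * ||X||_*  +  0.5 * ||mask * (X - M_obs)||_F^2,
    decreasing mu geometrically to mu_final along a homotopy path.
    Each inner step is a gradient step on the quadratic term followed
    by shrinkage; step length 1 is valid because the gradient of the
    quadratic term is 1-Lipschitz for a 0/1 mask."""
    X = np.zeros_like(M_obs)
    mu = np.linalg.norm(M_obs, 2)  # above this threshold X = 0 is optimal
    while True:
        for _ in range(inner):
            X = svt(X - mask * (X - M_obs), mu)
        if mu <= mu_final:
            return X
        mu = max(decay * mu, mu_final)

# Toy example: recover a rank-2 30 x 30 matrix from ~60% of its entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = (rng.random(M.shape) < 0.6).astype(float)
X = fpc(M * mask, mask)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With enough observed entries relative to the rank, the recovered `X` agrees closely with `M` on both observed and unobserved positions; the continuation on μ is what keeps the tiny final shrinkage threshold from stalling progress on the unobserved entries.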

References

  1. Bach F.R.: Consistency of trace norm minimization. J. Mach. Learn. Res. 9(Jun), 1019–1048 (2008)

  2. Bertalmío, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: Proceedings of SIGGRAPH 2000, New Orleans, USA (2000)

  3. Borwein J.M., Lewis A.S.: Convex Analysis and Nonlinear Optimization. Springer, New York (2003)

  4. Bregman L.: The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)

  5. Burer S., Monteiro R.D.C.: A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Math. Program. (Ser. B) 95, 329–357 (2003)

  6. Burer S., Monteiro R.D.C.: Local minima and convergence in low-rank semidefinite programming. Math. Program. 103(3), 427–444 (2005)

  7. Cai, J., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. Preprint available at http://arxiv.org/abs/0810.3286 (2008)

  8. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. (2009)

  9. Candès, E.J., Romberg, J.: ℓ1-MAGIC: recovery of sparse signals via convex programming. Technical Report, Caltech (2005)

  10. Candès E.J., Romberg J., Tao T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)

  11. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. Preprint available at http://arxiv.org/abs/0903.1476 (2009)

  12. Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing: closing the gap between performance and complexity. Preprint available at arXiv:0803.0811 (2008)

  13. Donoho D.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)

  14. Donoho, D.L., Tsaig, Y.: Fast solution of ℓ1-norm minimization problems when the solution may be sparse. Technical Report, Department of Statistics, Stanford University (2006)

  15. Donoho, D., Tsaig, Y., Drori, I., Starck, J.C.: Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory (2006) (submitted)

  16. Drineas P., Kannan R., Mahoney M.W.: Fast Monte Carlo algorithms for matrices ii: computing low-rank approximations to a matrix. SIAM J. Comput. 36, 158–183 (2006)

  17. Fazel, M.: Matrix rank minimization with applications. Ph.D. thesis, Stanford University (2002)

  18. Fazel, M., Hindi, H., Boyd, S.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference, vol. 6, pp. 4734–4739 (2001)

  19. Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1(4) (2007)

  20. Ghaoui, L.E., Gahinet, P.: Rank minimization under LMI constraints: a framework for output feedback problems. In: Proceedings of the European Control Conference (1993)

  21. Goldberg K., Roeder T., Gupta D., Perkins C.: Eigentaste: a constant time collaborative filtering algorithm. Inf. Retr. 4(2), 133–151 (2001)

  22. Goldfarb, D., Ma, S.: Convergence of fixed point continuation algorithms for matrix rank minimization. Technical Report, Department of IEOR, Columbia University (2009)

  23. Hale, E.T., Yin, W., Zhang, Y.: A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. Technical Report, CAAM TR07-07 (2007)

  24. Hiriart-Urruty J.B., Lemaréchal C.: Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods. Springer, New York (1993)

  25. Horn R.A., Johnson C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985)

  26. Keshavan, R.H., Montanari, A., Oh, S.: Matrix completion from a few entries. Preprint available at http://arxiv.org/abs/0901.3150 (2009)

  27. Kim S.J., Koh K., Lustig M., Boyd S., Gorinevsky D.: An interior-point method for large-scale ℓ1-regularized least squares. IEEE J. Sel. Top. Signal Process. 1(4), 606–617 (2007)

  28. Linial N., London E., Rabinovich Y.: The geometry of graphs and some of its algorithmic applications. Combinatorica 15, 215–245 (1995)

  29. Liu, Z., Vandenberghe, L.: Interior-point method for nuclear norm approximation with application to system identification. Preprint available at http://www.ee.ucla.edu/~vandenbe/publications/nucnrm.pdf (2008)

  30. Natarajan B.K.: Sparse approximation solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)

  31. Osher S., Burger M., Goldfarb D., Xu J., Yin W.: An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4(2), 460–489 (2005)

  32. Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. Preprint available at http://arxiv.org/abs/0706.4138 (2007)

  33. Rennie, J.D.M., Srebro, N.: Fast maximum margin matrix factorization for collaborative prediction. In: Proceedings of the International Conference of Machine Learning (2005)

  34. Rudin L., Osher S., Fatemi E.: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992)

  35. Spellman P.T., Sherlock G., Zhang M.Q., Iyer V.R., Anders K., Eisen M.B., Brown P.O., Botstein D., Futcher B.: Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell 9, 3273–3297 (1998)

  36. Srebro, N.: Learning with matrix factorizations. Ph.D. thesis, Massachusetts Institute of Technology (2004)

  37. Srebro, N., Jaakkola, T.: Weighted low-rank approximations. In: Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003) (2003)

  38. Sturm J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11(1–4), 625–653 (1999)

  39. Tibshirani R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58, 267–288 (1996)

  40. Tropp J.: Just relax: convex programming methods for identifying sparse signals. IEEE Trans. Inf. Theory 52, 1030–1051 (2006)

  41. Troyanskaya O., Cantor M., Sherlock G., Brown P., Hastie T., Tibshirani R., Botstein D., Altman R.B.: Missing value estimation methods for DNA microarrays. Bioinformatics 17(6), 520–525 (2001)

  42. Tütüncü R.H., Toh K.C., Todd M.J.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. Ser. B 95, 189–217 (2003)

  43. van den Berg E., Friedlander M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)

  44. Wen, Z., Yin, W., Goldfarb, D., Zhang, Y.: A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation. Technical Report, Department of IEOR, Columbia University (2009)

  45. Yin W., Osher S., Goldfarb D., Darbon J.: Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)

Author information

Correspondence to Shiqian Ma.

Additional information

Research supported in part by NSF Grant DMS 06-06712, ONR Grants N00014-03-0514 and N00014-08-1-1118, and DOE Grants DE-FG01-92ER-25126 and DE-FG02-08ER-58562.

About this article

Cite this article

Ma, S., Goldfarb, D. & Chen, L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 128, 321–353 (2011). https://doi.org/10.1007/s10107-009-0306-5
