Abstract
The linearly constrained matrix rank minimization problem is widely applicable in many fields such as control, signal processing and system identification. The tightest convex relaxation of this problem is linearly constrained nuclear norm minimization. Although the latter can be cast as a semidefinite programming problem, solving it by that route is computationally expensive when the matrices are large. In this paper, we propose fixed point and Bregman iterative algorithms for solving the nuclear norm minimization problem and prove convergence of the first of these algorithms. By combining a homotopy approach with an approximate singular value decomposition procedure, we obtain a very fast, robust and powerful algorithm, which we call FPCA (Fixed Point Continuation with Approximate SVD), that can solve very large matrix rank minimization problems (the code can be downloaded from http://www.columbia.edu/~sm2756/FPCA.htm for non-commercial use). Our numerical results on randomly generated and real matrix completion problems demonstrate that this algorithm is much faster and provides much better recoverability than semidefinite programming solvers such as SDPT3. For example, our algorithm can recover 1000 × 1000 matrices of rank 50 with a relative error of 10⁻⁵ in about 3 minutes by sampling only 20% of the elements. We know of no other method that achieves as good recoverability. Numerical experiments on online recommendation, DNA microarray and image inpainting problems demonstrate the effectiveness of our algorithms.
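The core of the fixed point algorithm described above alternates a gradient step on the data-fit term with a shrinkage (soft-thresholding) of the singular values, while the homotopy/continuation strategy decreases the regularization parameter geometrically. The following is a minimal NumPy sketch for the matrix completion setting, not the authors' FPCA code: the function names, step size, and parameter values are illustrative, and FPCA additionally replaces the exact SVD with a fast approximate SVD to scale to very large matrices.

```python
import numpy as np

def svt(Y, tau):
    """Shrink the singular values of Y by tau (soft-thresholding)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fpc_complete(M_obs, mask, mu_final=1e-4, tau=1.0, eta=0.25, inner=300):
    """Fixed point continuation sketch for
        min_X  mu * ||X||_* + 0.5 * ||P_Omega(X) - P_Omega(M)||_F^2.

    M_obs: matrix with observed entries filled in (zeros elsewhere).
    mask:  boolean array marking the observed entries (Omega).
    mu is decreased geometrically (continuation); each stage is
    warm-started from the previous one's iterate."""
    X = np.zeros_like(M_obs)
    mu = np.abs(M_obs).max()          # large initial regularization
    while mu > mu_final:
        mu = max(eta * mu, mu_final)  # continuation step
        for _ in range(inner):
            G = np.where(mask, X - M_obs, 0.0)  # gradient of data-fit term
            X = svt(X - tau * G, tau * mu)      # shrinkage (fixed point) step
    return X
```

With a small final mu, the iterate approximates the minimum nuclear norm matrix agreeing with the observed entries, which is what enables low-rank recovery from partial samples.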
References
Bach F.R.: Consistency of trace norm minimization. J. Mach. Learn. Res. 9(Jun), 1019–1048 (2008)
Bertalmío, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: Proceedings of SIGGRAPH 2000, New Orleans, USA (2000)
Borwein J.M., Lewis A.S.: Convex Analysis and Nonlinear Optimization. Springer, New York (2003)
Bregman L.: The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)
Burer S., Monteiro R.D.C.: A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Math. Program. (Ser. B) 95, 329–357 (2003)
Burer S., Monteiro R.D.C.: Local minima and convergence in low-rank semidefinite programming. Math. Program. 103(3), 427–444 (2005)
Cai, J., Candès, E.J., Shen, Z.: A singular value thresholding algorithm for matrix completion. Preprint available at http://arxiv.org/abs/0810.3286 (2008)
Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. (2009)
Candès, E.J., Romberg, J.: ℓ1-MAGIC: recovery of sparse signals via convex programming. Technical Report, Caltech (2005)
Candès E.J., Romberg J., Tao T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)
Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. Preprint available at http://arxiv.org/abs/0903.1476 (2009)
Dai, W., Milenkovic, O.: Subspace pursuit for compressive sensing: closing the gap between performance and complexity. Preprint available at arXiv: 0803.0811 (2008)
Donoho D.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)
Donoho, D.L., Tsaig, Y.: Fast solution of ℓ1-norm minimization problems when the solution may be sparse. Technical Report, Department of Statistics, Stanford University (2006)
Donoho, D., Tsaig, Y., Drori, I., Starck, J.C.: Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory (2006) (submitted)
Drineas P., Kannan R., Mahoney M.W.: Fast Monte Carlo algorithms for matrices II: computing low-rank approximations to a matrix. SIAM J. Comput. 36, 158–183 (2006)
Fazel, M.: Matrix rank minimization with applications. Ph.D. thesis, Stanford University (2002)
Fazel, M., Hindi, H., Boyd, S.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of the American Control Conference, vol. 6, pp. 4734–4739 (2001)
Figueiredo, M.A.T., Nowak, R.D., Wright, S.J.: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1(4) (2007)
Ghaoui, L.E., Gahinet, P.: Rank minimization under LMI constraints: a framework for output feedback problems. In: Proceedings of the European Control Conference (1993)
Goldberg K., Roeder T., Gupta D., Perkins C.: Eigentaste: a constant time collaborative filtering algorithm. Inf. Retr. 4(2), 133–151 (2001)
Goldfarb, D., Ma, S.: Convergence of fixed point continuation algorithms for matrix rank minimization. Technical Report, Department of IEOR, Columbia University (2009)
Hale, E.T., Yin, W., Zhang, Y.: A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. Technical Report, CAAM TR07-07 (2007)
Hiriart-Urruty J.B., Lemaréchal C.: Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods. Springer, New York (1993)
Horn R.A., Johnson C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985)
Keshavan, R.H., Montanari, A., Oh, S.: Matrix completion from a few entries. Preprint available at http://arxiv.org/abs/0901.3150 (2009)
Kim S.J., Koh K., Lustig M., Boyd S., Gorinevsky D.: A method for large-scale ℓ1-regularized least-squares. IEEE J. Sel. Top. Signal Process. 1(4), 606–617 (2007)
Linial N., London E., Rabinovich Y.: The geometry of graphs and some of its algorithmic applications. Combinatorica 15, 215–245 (1995)
Liu, Z., Vandenberghe, L.: Interior-point method for nuclear norm approximation with application to system identification. Preprint available at http://www.ee.ucla.edu/~vandenbe/publications/nucnrm.pdf (2008)
Natarajan B.K.: Sparse approximation solutions to linear systems. SIAM J. Comput. 24(2), 227–234 (1995)
Osher S., Burger M., Goldfarb D., Xu J., Yin W.: An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4(2), 460–489 (2005)
Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. Preprint available at http://arxiv.org/abs/0706.4138 (2007)
Rennie, J.D.M., Srebro, N.: Fast maximum margin matrix factorization for collaborative prediction. In: Proceedings of the International Conference of Machine Learning (2005)
Rudin L., Osher S., Fatemi E.: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992)
Spellman P.T., Sherlock G., Zhang M.Q., Iyer V.R., Anders K., Eisen M.B., Brown P.O., Botstein D., Futcher B.: Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell 9, 3273–3297 (1998)
Srebro, N.: Learning with matrix factorizations. Ph.D. thesis, Massachusetts Institute of Technology (2004)
Srebro, N., Jaakkola, T.: Weighted low-rank approximations. In: Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003) (2003)
Sturm J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11(1–4), 625–653 (1999)
Tibshirani R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58, 267–288 (1996)
Tropp J.: Just relax: convex programming methods for identifying sparse signals. IEEE Trans. Inf. Theory 51, 1030–1051 (2006)
Troyanskaya O., Cantor M., Sherlock G., Brown P., Hastie T., Tibshirani R., Botstein D., Altman R.B.: Missing value estimation methods for DNA microarrays. Bioinformatics 17(6), 520–525 (2001)
Tütüncü R.H., Toh K.C., Todd M.J.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. Ser. B 95, 189–217 (2003)
van den Berg E., Friedlander M.P.: Probing the Pareto frontier for basis pursuit solutions. SIAM J. Sci. Comput. 31(2), 890–912 (2008)
Wen, Z., Yin, W., Goldfarb, D., Zhang, Y.: A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation. Technical Report, Department of IEOR, Columbia University (2009)
Yin W., Osher S., Goldfarb D., Darbon J.: Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)
Additional information
Research supported in part by NSF Grant DMS 06-06712, ONR Grants N00014-03-0514 and N00014-08-1-1118, and DOE Grants DE-FG01-92ER-25126 and DE-FG02-08ER-58562.
Cite this article
Ma, S., Goldfarb, D. & Chen, L. Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. 128, 321–353 (2011). https://doi.org/10.1007/s10107-009-0306-5
Keywords
- Matrix rank minimization
- Matrix completion problem
- Nuclear norm minimization
- Fixed point iterative method
- Bregman distances
- Singular value decomposition