Abstract
The problem of recovering the low-rank and sparse components of a matrix is known as the stable principal component pursuit (SPCP) problem. It has found many applications in compressed sensing, image processing, and web data ranking. This paper proposes a generalized inexact Uzawa method for SPCP with nonnegative constraints. The main advantage of our method is that the resulting subproblems all have closed-form solutions and can be executed in a distributed manner. Global convergence of the method is proved from the variational inequality perspective. Numerical experiments show that our algorithm converges to the same optimal solution as other distributed methods, with better performance.
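The abstract does not spell out the subproblems, but in SPCP-type splitting methods the closed-form solutions typically reduce to three standard proximal/projection operators: singular value thresholding for the nuclear-norm (low-rank) term, entrywise soft thresholding for the ℓ1 (sparse) term, and projection onto the nonnegative orthant for the nonnegative constraint. A minimal sketch of these operators, assuming this standard structure (the function names are illustrative, not from the paper):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: closed-form prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    """Entrywise soft thresholding: closed-form prox of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def project_nonnegative(M):
    """Euclidean projection onto the nonnegative orthant."""
    return np.maximum(M, 0.0)
```

Because each operator acts independently on its own block variable, an inexact Uzawa or ADMM-style iteration built from them can update the low-rank, sparse, and constrained blocks in parallel, which is what enables the distributed execution mentioned above.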
Acknowledgements
This work was supported in part by the Natural Science Foundation of China under Grant 71401176 and the Natural Science Foundation of Jiangsu Province under Grant BK20140065.
Cite this article
Huai, K., Ni, M., Yu, Z. et al. A generalized inexact Uzawa method for stable principal component pursuit problem with nonnegative constraints. Numer Algor 77, 653–674 (2018). https://doi.org/10.1007/s11075-017-0333-x