Abstract
Convex optimization has become an essential tool in many disciplines. In this paper, we consider a primal-dual algorithm for minimizing augmented models with linear constraints, where the objective function is the sum of two proper closed convex functions: the square of a norm and a gauge function that is partly smooth relative to an active manifold. Examples of such models appear in the signal processing, optimization, statistics and machine learning literature. We present a unified framework for understanding the local convergence behaviour of the primal-dual algorithm applied to these augmented models. Our result explains the local linear convergence of the algorithm observed numerically in the literature.
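As a concrete, purely illustrative instance of the class described above (the abstract does not fix a particular model), one may keep in mind an augmented \(\ell_1\) model of the form

\[
\min_{x \in \mathbb{R}^n} \; \|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2 \quad \text{subject to} \quad Ax = b,
\]

where \(\alpha > 0\), \(A\) is a linear constraint matrix and \(b\) a data vector, all of which are illustrative assumptions here. The quadratic term \(\frac{1}{2\alpha}\|x\|_2^2\) is (a positive multiple of) the square of a norm, while the gauge \(\|x\|_1\) is partly smooth relative to the manifold of vectors sharing a fixed support, matching the structure assumed in the abstract.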
Acknowledgments
The authors are indebted to the editors and anonymous referees for their useful suggestions, which helped improve the quality of the manuscript. H.J. and L.C. have been supported during this research by the National Natural Science Foundation of Hunan Province, China (13JJ2001), the Science Project of National University of Defense Technology (JC120201) and the National Science Foundation of China (No. 61402495). R.B. has been supported during this research by the Spanish Research Projects MTM2012-31883 and MTM2015-64095-P, the University of Zaragoza/CUD Project UZCUD2015-CIE-05 and the European Social Fund and Diputación General de Aragón (Grant E48).
Cite this article
Sun, T., Barrio, R., Jiang, H. et al. Local Linear Convergence of a Primal-Dual Algorithm for the Augmented Convex Models. J Sci Comput 69, 1301–1315 (2016). https://doi.org/10.1007/s10915-016-0235-4
Keywords
- Convex optimization
- Primal-dual algorithm
- Augmented convex model
- Partial smoothness
- Local linear convergence