
Local Linear Convergence of a Primal-Dual Algorithm for the Augmented Convex Models

Journal of Scientific Computing

Abstract

Convex optimization has become an essential technique in many different disciplines. In this paper, we consider the primal-dual algorithm for minimizing augmented models with linear constraints, where the objective function is the sum of two proper closed convex functions: one is the square of a norm, and the other is a gauge function that is partly smooth relative to an active manifold. Examples of this situation can be found in the signal processing, optimization, statistics, and machine learning literature. We present a unified framework for understanding the local convergence behaviour of the primal-dual algorithm for these augmented models. This result explains numerical observations of local linear convergence of the algorithm reported in the literature.
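To make the setting concrete, the display below sketches a representative instance of this model class together with a standard dual-ascent (linearized-Bregman-type) iteration for it. The specific augmented \(\ell_1\) formulation, the step size \(\tau\), and the shrinkage operator are illustrative assumptions and are not claimed to be the exact algorithm analysed in the paper.

\[
\min_{x\in\mathbb{R}^n}\ \mu\|x\|_1+\tfrac{1}{2}\|x\|_2^2\quad\text{subject to}\quad Ax=b,
\]

where the gauge part \(\mu\|x\|_1\) is partly smooth relative to the manifold of vectors sharing the support of the solution, and \(\tfrac{1}{2}\|x\|_2^2\) is the squared-norm part. A classical dual scheme for this model alternates a shrinkage step with a gradient step on the dual variable \(y\),

\[
x^{k}=\operatorname{shrink}\!\big(A^{\top}y^{k},\mu\big),\qquad
y^{k+1}=y^{k}+\tau\big(b-Ax^{k}\big),\qquad
\operatorname{shrink}(z,\mu)_i=\operatorname{sign}(z_i)\,\max\{|z_i|-\mu,0\},
\]

with \(0<\tau<2/\|AA^{\top}\|\). Once the iterates identify the active manifold (here, the support of the limit point), the nonsmooth iteration reduces to a smooth linear recursion on that manifold, which is the standard partial-smoothness mechanism behind the kind of local linear convergence behaviour described in the abstract.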



Acknowledgments

The authors are indebted to the editors and anonymous referees for their useful suggestions, which helped improve the quality of the manuscript. H.J. and L.C. have been supported during this research by the National Natural Science Foundation of Hunan Province, China (13JJ2001), the Science Project of National University of Defense Technology (JC120201) and the National Science Foundation of China (No. 61402495). R.B. has been supported during this research by the Spanish Research Projects MTM2012-31883 and MTM2015-64095-P, the University of Zaragoza/CUD Project UZCUD2015-CIE-05 and the European Social Fund and Diputación General de Aragón (Grant E48).

Author information

Correspondence to Tao Sun.


Cite this article

Sun, T., Barrio, R., Jiang, H. et al. Local Linear Convergence of a Primal-Dual Algorithm for the Augmented Convex Models. J Sci Comput 69, 1301–1315 (2016). https://doi.org/10.1007/s10915-016-0235-4

