The generalized proximal point algorithm with step size 2 is not necessarily convergent

Abstract

The proximal point algorithm (PPA) is a fundamental method in optimization, and it has been well studied in the literature. Recently, a generalized version of the PPA with a step size in (0, 2) has been proposed. While inheriting all the important theoretical properties of the original PPA, the generalized PPA offers numerical advantages that have been verified by various applications in the literature. It is common sense that larger step sizes are preferred whenever convergence can be theoretically ensured; it is therefore natural to ask whether the step size of the generalized PPA can be as large as 2. We give a negative answer to this question. We construct counterexamples illustrating the divergence of the generalized PPA with step size 2 in both generic and specific settings, including the generalized versions of the very popular augmented Lagrangian method and the alternating direction method of multipliers. A by-product of our analysis is the failure of convergence of the Peaceman–Rachford splitting method, as well as of a generalized version of the forward–backward splitting method with step size 1.5.
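
For concreteness, the recursion in question can be written as follows; this is a sketch following the generalized PPA of [7], with \(J_{\lambda T} := (I + \lambda T)^{-1}\) denoting the resolvent of the maximal monotone operator \(T\) (our shorthand; the paper's own labeling and normalization of the scheme may differ slightly):

\[
w^{k+1} \;=\; w^k - \gamma\,\bigl(w^k - J_{\lambda T}(w^k)\bigr), \qquad \lambda > 0,\ \ \gamma \in (0, 2).
\]

At the boundary value \(\gamma = 2\) the update reduces to \(w^{k+1} = (2J_{\lambda T} - I)(w^k)\), i.e., pure iteration of the reflected resolvent (Cayley operator) of \(\lambda T\). Since \(J_{\lambda T}\) is firmly nonexpansive, the map \(I - \gamma(I - J_{\lambda T})\) is averaged for \(\gamma \in (0, 2)\), but at \(\gamma = 2\) only nonexpansiveness survives, which is precisely where the standard Fejér-monotonicity argument breaks down.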

Notes

  1. Note that it is even possible to consider \(\gamma >2\) in (4), but this case usually requires much stronger assumptions on T to ensure convergence; see, e.g., [7] for details. We therefore do not consider the case \(\gamma >2\) in this note.
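
To complement the remark above at the boundary case \(\gamma = 2\), here is a minimal numerical sketch in Python (our own illustrative construction, not necessarily one of the counterexamples built in the paper): take \(T(w) = Aw\) with a skew-symmetric matrix \(A\), which is maximal monotone with the origin as its unique zero. For \(\gamma = 2\) the iteration map \(2(I+A)^{-1} - I\) is an orthogonal (Cayley) matrix, so the iterates circle the solution forever, while any \(\gamma \in (0, 2)\) converges geometrically.

    import numpy as np

    # T(w) = A w with A skew-symmetric is maximal monotone; its only zero is w = 0.
    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
    J = np.linalg.inv(np.eye(2) + A)   # resolvent (I + A)^{-1}, taking lambda = 1

    def generalized_ppa(gamma, w0, iters=200):
        # Generalized PPA step w <- w - gamma * (w - J w), repeated `iters` times.
        w = np.asarray(w0, dtype=float)
        for _ in range(iters):
            w = w - gamma * (w - J @ w)
        return w

    for gamma in (0.5, 1.0, 1.5, 2.0):
        w = generalized_ppa(gamma, [1.0, 0.0])
        print(f"gamma = {gamma:.1f}   ||w_200|| = {np.linalg.norm(w):.3e}")

    # Typical output pattern: the norm decays to (numerical) zero for gamma < 2,
    # whereas for gamma = 2 it stays at ||w_0|| = 1, since 2J - I is a rotation.

This mirrors the by-product mentioned in the abstract: the Peaceman–Rachford splitting method can be viewed as the step-size-2 instance of a relaxed Douglas–Rachford/PPA scheme, for which only nonexpansiveness, not averagedness, is available.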

References

  1. Bai, Z.Z., Hadjidimos, A.: Optimization of extrapolated Cayley transform with non-Hermitian positive definite matrix. Linear Algebra Appl. 463, 322–339 (2014)

  2. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)

  3. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York (1982)

  4. Cayley, A.: On the theory of linear transformations. Camb. J. Math. 4, 1–16 (1845)

  5. Chen, G.H.-G., Rockafellar, R.T.: Convergence rate in forward–backward splitting. SIAM J. Optim. 7, 1–25 (1997)

  6. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2006)

  7. Corman, E., Yuan, X.M.: A generalized proximal point algorithm and its convergence rate. SIAM J. Optim. 24, 1614–1638 (2014)

  8. Davis, D., Yin, W.: Convergence rate analysis of several splitting schemes. http://arxiv.org/pdf/1406.4834.pdf

  9. Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82, 421–439 (1956)

  10. Eckstein, J.: Splitting methods for monotone operators with applications to parallel optimization. Ph.D. Dissertation, MIT, Cambridge (1989)

  11. Eckstein, J.: Parallel alternating direction multiplier decomposition of convex programs. J. Optim. Theory Appl. 80, 39–62 (1994)

  12. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)

  13. Fang, E.X., Liu, H., He, B.S., Yuan, X.M.: The generalized alternating direction method of multipliers: new theoretical insights and applications. Math. Prog. Comput. 7, 149–187 (2015)

  14. Gabay, D.: Applications of the method of multipliers to variational inequalities. In: Fortin, M., Glowinski, R. (eds.) Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, pp. 299–331. North-Holland, Amsterdam (1983)

  15. Glowinski, R., Kärkkäinen, T., Majava, K.: On the convergence of operator-splitting methods. In: Kuznetsov, Y., Neittaanmäki, P., Pironneau, O. (eds.) Numerical Methods for Scientific Computing, Variational Problems and Applications, pp. 67–79. CIMNE, Barcelona (2003)

  16. Glowinski, R., Marrocco, A.: Approximation par éléments finis d'ordre un et résolution par pénalisation-dualité d'une classe de problèmes non linéaires. RAIRO R2, 41–76 (1975)

  17. Gol'shtein, E.G., Tret'yakov, N.V.: Modified Lagrangians in convex programming and their generalizations. Math. Program. Stud. 10, 86–97 (1979)

  18. Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Optim. 1, 403–419 (1991)

  19. Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)

  20. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekon. i Mat. Metody 12, 747–756 (1976)

  21. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)

  22. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. d'Inf. et de Rech. Opér. 4, 154–159 (1970)

  23. Moreau, J.J.: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 93, 273–299 (1965)

  24. Nemirovski, A.: Prox-method with rate of convergence \(O(1/t)\) for variational inequality with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim. 15, 229–251 (2005)

  25. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383–390 (1979)

  26. Peaceman, D.W., Rachford, H.H.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3, 28–41 (1955)

  27. Polyak, B.T.: Introduction to Optimization. Translations Series in Mathematics and Engineering, Optimization Software. Publications Division, New York (1987)

  28. Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, New York (1969)

  29. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  30. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)

  31. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97–116 (1976)

  32. Shen, L., Pan, S.H.: Linear convergence of the generalized PPA and several splitting methods for the composite inclusion problem. http://arxiv.org/abs/1508.05156v2

  33. Teboulle, M.: Convergence of proximal-like algorithms. SIAM J. Optim. 7, 1069–1083 (1997)

  34. Tao, M., Yuan, X.M.: On the optimal linear convergence rate of a generalized proximal point algorithm. J. Sci. Comput. 74, 826–850 (2018)

  35. Wen, Z., Goldfarb, D., Yin, W.: Alternating direction augmented Lagrangian methods for semidefinite programming. Math. Progr. Comput. 2, 203–230 (2010)

Author information

Corresponding author

Correspondence to Min Tao.

Additional information

Min Tao was supported by the Natural Science Foundation of China (NSFC-11301280) and by the Jiangsu overseas research and training program for university prominent young and middle-aged teachers and presidents. Xiaoming Yuan was supported by the General Research Fund from the Hong Kong Research Grants Council (HKBU 12313516).

About this article

Cite this article

Tao, M., Yuan, X. The generalized proximal point algorithm with step size 2 is not necessarily convergent. Comput Optim Appl 70, 827–839 (2018). https://doi.org/10.1007/s10589-018-9992-3
