Level-Set Subdifferential Error Bounds and Linear Convergence of Bregman Proximal Gradient Method

Published in: Journal of Optimization Theory and Applications

Abstract

In this work, we develop a level-set subdifferential error bound condition with an eye toward convergence rate analysis of a variable Bregman proximal gradient (VBPG) method for a broad class of nonsmooth and nonconvex optimization problems. It is proved that this condition guarantees linear convergence of VBPG and is weaker than the Kurdyka–Łojasiewicz property, weak metric subregularity, and the Bregman proximal error bound. Along the way, we derive a number of verifiable conditions under which level-set subdifferential error bounds hold, together with necessary conditions and sufficient conditions for linear convergence relative to a level set for nonsmooth and nonconvex optimization problems. The newly established results not only enable us to show that any accumulation point of the sequence generated by VBPG is at least a critical point with respect to the limiting subdifferential, or even a critical point with respect to the proximal subdifferential when a fixed Bregman function is used in each iteration, but also provide a fresh perspective from which to explore the interconnections among many known sufficient conditions for linear convergence of various first-order methods.
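
To make the iteration being analysed concrete, the following is a minimal numerical sketch of a Bregman proximal gradient step, not the authors' general VBPG scheme: the Bregman kernel is fixed to h(x) = (1/2)||x||^2, so the Bregman distance D_h(u, x) = (1/2)||u - x||^2 and the subproblem reduces to an ordinary proximal gradient update. The l1-regularized least-squares instance, the step size rule, and the function names (soft_threshold, bpg_step) are illustrative assumptions rather than material from the paper. Schematically, an error bound of the kind studied here asks that, on a slice of a level set near a reference point, the distance to the relevant critical set be bounded by a constant multiple of dist(0, ∂f(x)); the paper's precise definition refines this level-set dependence.

```python
# Illustrative sketch only: Bregman proximal gradient iterations with the
# Euclidean kernel h(x) = 0.5*||x||^2, applied to the assumed test problem
#   min_x  0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, tau):
    # Proximal map of tau*||.||_1 (closed form for this illustrative g).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def bpg_step(x, grad_f, step, lam):
    # With h(x) = 0.5*||x||^2, the Bregman subproblem
    #   argmin_u  lam*||u||_1 + <grad_f, u - x> + (1/step) * D_h(u, x)
    # is solved in closed form by a standard proximal gradient step.
    return soft_threshold(x - step * grad_f, step * lam)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad f

x = np.zeros(A.shape[1])
for _ in range(500):
    grad = A.T @ (A @ x - b)             # gradient of the smooth part f
    x = bpg_step(x, grad, step, lam)
print(np.count_nonzero(np.abs(x) > 1e-8), "nonzero entries in the computed point")
```

In the paper's variable Bregman setting the kernel may change from iteration to iteration, and the level-set subdifferential error bound developed in the text is what upgrades such iterations from mere descent to linear convergence; the sketch above only fixes the structure of a single step.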


Notes

  1. This basic question was partially answered in the convex case by introducing a Bregman distance growth condition (see [39]).

References

  1. Aussel, D., Daniilidis, A., Thibault, L.: Subsmooth sets: functional characterizations and related concepts. Trans. Am. Math. Soc. 357(4), 1275–1301 (2004)

  2. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 137(1–2), 91–129 (2013)

  3. Aybat, N.S., Iyengar, G.: A unified approach for minimizing composite norms. Math. Program. 144(1–2), 181–226 (2014)

  4. Banjac, G., Margellos, K., Goulart, P.J.: On the convergence of a regularized Jacobi algorithm for convex optimization. IEEE Trans. Autom. Control 63(4), 1113–1119 (2018)

  5. Bernard, F., Thibault, L.: Uniform prox-regularity of functions and epigraphs in Hilbert spaces. Nonlinear Anal. 60, 187–207 (2005)

  6. Bolte, J., Daniilidis, A., Ley, O., Mazet, L.: Characterizations of Łojasiewicz inequalities: subgradient flows, talweg, convexity. Trans. Am. Math. Soc. 362(6), 3319–3363 (2010)

  7. Bolte, J., Nguyen, T.P., Peypouquet, J., Suter, B.W.: From error bounds to the complexity of first-order descent methods for convex functions. Math. Program. 165(2), 471–507 (2017)

  8. Bonettini, S., Loris, I., Porta, F., Prato, M.: Variable metric inexact line-search-based methods for nonsmooth optimization. SIAM J. Optim. 26(2), 891–921 (2016)

  9. Burke, J.V., Deng, S.: Weak sharp minima revisited, part I: basic theory. Control Cybern. 31, 439–469 (2002)

  10. Candès, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)

  11. Carpentier, P., Cohen, G.: Décomposition-Coordination en Optimisation Déterministe et Stochastique. Springer, Berlin (2017)

  12. Charisopoulos, V., Chen, Y., Davis, D., Díaz, M., Ding, L., Drusvyatskiy, D.: Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence. In: Foundations of Computational Mathematics, pp. 1–89 (2021)

  13. Chouzenoux, E., Pesquet, J.C., Repetti, A.: Variable metric forward–backward algorithm for minimizing the sum of a differentiable function and a convex function. J. Optim. Theory Appl. 162(1), 107–132 (2014)

  14. Cohen, G.: Auxiliary problem principle and decomposition of optimization problems. J. Optim. Theory Appl. 32(3), 277–305 (1980)

  15. Cohen, G., Zhu, D.: Decomposition and coordination methods in large scale optimization problems: the nondifferentiable case and the use of augmented Lagrangians. Adv. Large Scale Syst. 1, 203–266 (1984)

  16. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)

  17. Dontchev, A., Rockafellar, R.T.: Implicit Functions and Solution Mappings. Springer (2009)

  18. Drusvyatskiy, D., Lewis, A.S.: Error bounds, quadratic growth, and linear convergence of proximal methods. Math. Oper. Res. 43(3), 919–948 (2018)

  19. Frankel, P., Garrigos, G., Peypouquet, J.: Splitting methods with variable metric for Kurdyka–Łojasiewicz functions and general convergence rates. J. Optim. Theory Appl. 165(3), 874–900 (2015)

  20. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Natl. Bureau Stand. 49, 263–265 (1952)

  21. Hu, Y., Li, C., Meng, K., Qin, J., Yang, X.: Group sparse optimization via \(\ell _{p, q}\) regularization. J. Mach. Learn. Res. 18(1), 960–1011 (2017)

  22. Karimi, H., Nutini, J., Schmidt, M.: Linear convergence of gradient and proximal-gradient methods under the Polyak–Łojasiewicz condition. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Springer, Cham, pp. 795–811 (2016)

  23. Kruger, A.Y., López, M.A., Yang, X.Q., Zhu, J.X.: Hölder error bounds and Hölder calmness with applications to convex semi-infinite optimization. Set-Valued Var. Anal. 27, 995–1023 (2019)

  24. Li, G., Pong, T.K.: Calculus of the exponent of Kurdyka–Łojasiewicz inequality and its applications to linear convergence of first-order methods. Found. Comput. Math. 18(5), 1199–1232 (2018)

  25. Łojasiewicz, S.: A topological property of real analytic subsets (in French). In: Colloques du CNRS, Les Équations aux Dérivées Partielles, pp. 87–89 (1963)

  26. Luo, Z.Q., Tseng, P.: Error bound and convergence analysis of matrix splitting algorithms for the affine variational inequality problem. SIAM J. Optim. 2(1), 43–54 (1992)

  27. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I: Basic Theory, vol. 330. Springer (2006)

  28. Necoara, I., Nesterov, Y., Glineur, F.: Linear convergence of first order methods for non-strongly convex optimization. Math. Program. 1–39 (2018)

  29. Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course, vol. 87, Springer (2013)

  30. Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables, vol. 30, SIAM (1970)

  31. Pang, J.S.: Error bounds in mathematical programming. Math. Program. 79, 299–332 (1997)

  32. Polyak, B.T.: Gradient methods for minimizing functionals (in Russian). Zh. Vychislitel'noi Matematiki i Matematicheskoi Fiziki 3(4), 643–653 (1963)

  33. Robinson, S.M.: Some continuity properties of polyhedral multifunctions. Math. Oper. Res. 5, 206–214 (1980)

  34. Rockafellar, R.T., Wets, R.J.B.: Variational Analysis, vol. 317, Springer (2009)

  35. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58(1), 267–288 (1996)

  36. Tseng, P., Yun, S.: A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. 117(1–2), 387–423 (2009)

  37. Wang, X., Ye, J.J., Yuan, X., Zeng, S., Zhang, J.: Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis. Set-Valued Var. Anal. (2021). https://doi.org/10.1007/s11228-020-00570-0

  38. Zhang, H.: New analysis of linear convergence of gradient-type methods via unifying error bound conditions. Math. Program. 180, 371–416 (2020)

  39. Zhang, H., Dai, Y.H., Guo, L., Peng, W.: Proximal-like incremental aggregated gradient method with linear convergence under Bregman distance growth conditions. In: Mathematics of Operations Research, pp. 1–21 (2019)

  40. Zhu, D., Marcotte, P.: An extended descent framework for variational inequalities. J. Optim. Theory Appl. 80(2), 349–366 (1994)

  41. Zhu, D., Deng, S.: A Variational Approach on Level Sets and Linear Convergence of Variable Bregman Proximal Gradient Method for Nonconvex Optimization Problems. arXiv preprint arXiv:1905.08445 (2019)

  42. Zhu, D., Deng, S., Li, M., Zhao, L.: Level-Set Subdifferential Error Bounds and Linear Convergence of Variable Bregman Proximal Gradient Method. arXiv preprint arXiv:2008.13627 (2020)

Acknowledgements

This research was supported by NSFC 71871140, CQ-NCAM-2021-02, and P2017SC01.

Author information

Corresponding author

Correspondence to Sien Deng.

Additional information

Communicated by Xiaoqi Yang.



About this article


Cite this article

Zhu, D., Deng, S., Li, M. et al. Level-Set Subdifferential Error Bounds and Linear Convergence of Bregman Proximal Gradient Method. J Optim Theory Appl 189, 889–918 (2021). https://doi.org/10.1007/s10957-021-01865-4
