Quasi-monotone Subgradient Methods for Nonsmooth Convex Minimization

Abstract

In this paper, we develop new subgradient methods for solving nonsmooth convex optimization problems. These methods guarantee the best possible rate of convergence for the whole sequence of test points. Our methods are applicable as efficient real-time stabilization tools for potential systems with an infinite horizon. Preliminary numerical experiments confirm the high efficiency of the new schemes.


Notes

  1. For notational convenience, we assume that this subgradient is uniquely defined by its argument. At points of nondifferentiability this is, in general, not true: for \(f(x) = |x|\), for instance, every \(g \in [-1,1]\) is a subgradient at zero. However, we assume that at such points the first-order oracle always returns the same answer (a minimal sketch of such an oracle follows these notes). The same convention is used for all convex functions in this paper (e.g., prox-functions; see below).

  2. In paper [5], Beck and Teboulle justified a primal subgradient method, which works with Bregman distances: \(x_{t+1} = \arg\min \limits _{x \in Q} \{ a_t \langle \nabla f(x_t), x \rangle + D(x_t,x) \}\), where \(D(x,y) = d(y) - d(x) - \langle \nabla d(x), y - x \rangle \). The rate of convergence of this method can be derived from the same inequality (10). In our terminology, this is a pure primal scheme, since it does not maintain a linear model of the objective function (a concrete instance of this step follows these notes).

  3. Recall that, for the Euclidean framework with \(Q \equiv \mathbb {E}\), PGM coincides with MDM (9); a short verification follows these notes.
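
As a minimal illustration of the oracle convention in Note 1 (a sketch, not code from the paper), a deterministic first-order oracle for \(f(x) = |x|\) can fix its answer at the kink once and for all:

    # Deterministic first-order oracle for f(x) = |x|.
    # At x = 0 every g in [-1, 1] is a valid subgradient; the convention of
    # Note 1 requires returning the same fixed element on every call.
    def oracle_abs(x: float) -> tuple[float, float]:
        value = abs(x)
        if x > 0:
            subgrad = 1.0
        elif x < 0:
            subgrad = -1.0
        else:
            subgrad = 0.0  # arbitrary but fixed tie-breaking choice
        return value, subgrad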
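
To make the update of Note 2 concrete, the following sketch implements one step of the scheme under an assumption made purely for illustration: the entropy prox-function \(d(x) = \sum_i x_i \ln x_i\) on the standard simplex, for which \(D\) becomes the Kullback-Leibler divergence and the minimizer has a closed multiplicative form. The function name and signature are ours, not the paper's.

    import numpy as np

    # One step of the primal (mirror descent) scheme of Note 2, specialized to
    # the entropy prox-function on the standard simplex Q. The minimizer of
    # a_t * <grad f(x_t), x> + D(x_t, x) over Q is a multiplicative update.
    def bregman_step(x_t: np.ndarray, g_t: np.ndarray, a_t: float) -> np.ndarray:
        x_next = x_t * np.exp(-a_t * g_t)
        return x_next / x_next.sum()  # renormalization keeps the iterate in Q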
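
The claim of Note 3 can be checked directly from the definitions in Note 2: for the Euclidean prox-function \(d(x) = \frac{1}{2} \Vert x \Vert ^2\) one has \(D(x,y) = \frac{1}{2} \Vert y - x \Vert ^2\), and with \(Q \equiv \mathbb {E}\) the minimizer of the step is found by setting the gradient to zero:

\[ a_t \nabla f(x_t) + x_{t+1} - x_t = 0 \quad \Longrightarrow \quad x_{t+1} = x_t - a_t \nabla f(x_t), \]

which is exactly the Euclidean subgradient step of MDM (9).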

References

  1. Shor, N.Z.: Minimization Methods for Non-differentiable Functions. Springer, Berlin (1985)

  2. Polyak, B.T.: Introduction to Optimization. Optimization Software, New York (1987)

  3. Nemirovsky, A.S., Yudin, D.B.: Problem Complexity and Method Efficiency in Optimization. Wiley, New York (1983)

  4. Nesterov, Yu.: Primal-dual subgradient methods for convex problems. Math. Program. 120, 261–283 (2009)

  5. Beck, A., Teboulle, M.: Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett. 31, 167–175 (2003)

  6. Nesterov, Yu.: Introductory Lectures on Convex Optimization. Kluwer, Boston (2004)

  7. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

Acknowledgments

The authors would like to thank two anonymous referees for careful reading and useful comments. The research results presented in this paper were supported by the grant "Action de recherche concertée ARC 04/09-315" from the "Direction de la recherche scientifique - Communauté française de Belgique" and by PremoLab, MIPT (RF government grant, ag. 11.G34.31.0073), together with RFBR research projects 13-01-12007 ofi_m and 14-01-00722-a.

Author information

Correspondence to Yu. Nesterov.

Additional information

Communicated by Amir Beck.

Cite this article

Nesterov, Y., Shikhman, V. Quasi-monotone Subgradient Methods for Nonsmooth Convex Minimization. J Optim Theory Appl 165, 917–940 (2015). https://doi.org/10.1007/s10957-014-0677-5
