
On the projected subgradient method for nonsmooth convex optimization in a Hilbert space


Abstract

We consider the projected subgradient method for constrained convex optimization in a Hilbert space, which consists of a step in the direction opposite to an ε_k-subgradient of the objective at the current iterate, followed by an orthogonal projection onto the feasible set. The normalized stepsizes α_k are exogenously given, satisfying Σ_{k=0}^∞ α_k = ∞ and Σ_{k=0}^∞ α_k² < ∞, and ε_k is chosen so that ε_k ⩽ μα_k for some μ > 0. We prove that the sequence generated in this way converges weakly to a minimizer if the problem has solutions, and is unbounded otherwise. Among the features of our convergence analysis, we mention that it covers the nonsmooth case, in the sense that we make no assumption of differentiability of f, much less of Lipschitz continuity of its gradient. Also, we prove weak convergence of the whole sequence, rather than just boundedness of the sequence and optimality of its weak accumulation points, thus improving over all previously known convergence results. We also present convergence rate results. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
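To make the scheme concrete, the following minimal Python sketch implements the iteration on a toy problem. The problem instance, the helper names (projected_subgradient, subgrad, project), and the stepsize choice α_k = 1/(k+1) are our own illustrative assumptions, not taken from the paper; the sketch also uses exact subgradients (ε_k = 0), whereas the paper allows inexact ones.

    import numpy as np

    def projected_subgradient(subgrad, project, x0, stepsizes):
        # Each iteration: step opposite a normalized subgradient with
        # exogenous stepsize alpha_k, then project onto the feasible set.
        x = np.asarray(x0, dtype=float)
        for alpha in stepsizes:
            g = subgrad(x)
            gnorm = np.linalg.norm(g)
            if gnorm == 0.0:    # 0 lies in the subdifferential: x is a minimizer
                break
            x = project(x - alpha * g / gnorm)
        return x

    # Toy instance (our assumption): minimize the nonsmooth f(x) = ||x - b||_1
    # over the box [0, 1]^3, whose orthogonal projection is a coordinatewise clip.
    b = np.array([0.2, 1.5, -0.3])
    subgrad = lambda x: np.sign(x - b)              # an element of the l1 subdifferential
    project = lambda x: np.clip(x, 0.0, 1.0)
    alphas = (1.0 / (k + 1) for k in range(20000))  # sum alpha_k = inf, sum alpha_k^2 < inf
    print(projected_subgradient(subgrad, project, np.zeros(3), alphas))
    # -> approximately [0.2, 1.0, 0.0]

The stepsizes satisfy the two summability conditions of the theorem; since this example is finite-dimensional, weak and strong convergence coincide, and the iterates approach the minimizer (0.2, 1, 0).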




Additional information

Research of this author was partially supported by CNPq grant nos. 301280/86 and 300734/95-6.



Cite this article

Alber, Y.I., Iusem, A.N. & Solodov, M.V. On the projected subgradient method for nonsmooth convex optimization in a Hilbert space. Mathematical Programming 81, 23–35 (1998). https://doi.org/10.1007/BF01584842
