Abstract
This paper develops the convergence theory of the gradient projection method of Calamai and Moré (Math. Programming, vol. 39, pp. 93–116, 1987) for the continuously differentiable minimization problem min{f(x) : x ∈ Ω}, where Ω is a nonempty closed convex set. The method generates a sequence xk+1 = P(xk − αk ∇f(xk)), where the stepsize αk > 0 is chosen suitably. It is shown that when f is pseudo-convex (quasi-convex), the method has strong convergence properties: either xk → x* and x* is a minimizer (stationary point); or ‖xk‖ → ∞, arg min{f(x) : x ∈ Ω} = ∅, and f(xk) ↓ inf{f(x) : x ∈ Ω}.
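The iteration xk+1 = P(xk − αk ∇f(xk)) can be sketched in a few lines. The following is a minimal illustration, not the authors' method: it assumes a fixed stepsize (the paper allows general stepsize rules), and takes Ω to be the nonnegative orthant so the projection P is a componentwise maximum with zero; the function names are illustrative.

```python
import numpy as np

def projected_gradient(grad_f, project, x0, step=0.1, tol=1e-8, max_iter=10000):
    """Iterate x_{k+1} = P(x_k - step * grad_f(x_k)) until the update is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = project(x - step * grad_f(x))
        if np.linalg.norm(x_new - x) < tol:  # stop when the iterate stabilizes
            return x_new
        x = x_new
    return x

# Example: minimize f(x) = ||x - c||^2 / 2 over Omega = {x : x >= 0}.
c = np.array([1.0, -2.0])
grad_f = lambda x: x - c                  # gradient of the quadratic
project = lambda x: np.maximum(x, 0.0)    # projection onto the nonnegative orthant
x_star = projected_gradient(grad_f, project, np.zeros(2))
```

Here the minimizer over the orthant is obtained by clipping the unconstrained minimizer c at zero, so the iterates converge to (1, 0), consistent with the first alternative in the abstract.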
References
D.P. Bertsekas, “On the Goldstein-Levitin-Polyak gradient projection method,” IEEE Trans. Automat. Contr., vol. AC-21, pp. 174–184, 1976.
D.P. Bertsekas, “Projected Newton method for optimization problems with simple constraints,” SIAM J. Control Optim., vol. 20, pp. 221–246, 1982.
J.V. Burke, J.J. Moré, and G. Toraldo, “Convergence properties of trust region methods for linear and convex constraints,” Math. Programming, vol. 47, pp. 305–336, 1990.
P.H. Calamai and J.J. Moré, “Projected gradient methods for linearly constrained problems,” Math. Programming, vol. 39, pp. 93–116, 1987.
Y.C. Cheng, “On the gradient-projection method for solving the nonsymmetric linear complementarity problem,” J. Optim. Theory Appl., vol. 43, pp. 527–541, 1984.
C.C. Chou, K.F. Ng, and J.S. Pang, “Minimizing and stationary sequences of optimization problems,”Technical Report, The Johns Hopkins University, Baltimore, Maryland 21218-2692, USA, 1996.
A.R. Conn, N.I.M. Gould, and Ph.L. Toint, “Global convergence of a class of trust region algorithms for optimization with simple bounds,” SIAM J. Numer. Anal., vol. 25, no. 2, pp. 433–460, 1988.
A.R. Conn, N.I.M. Gould, and Ph.L. Toint, “Testing a class of methods for solving minimization problems with simple bounds on the variables,” Mathematics of Computation, vol. 50, pp. 399–430, 1988.
J.M. Danskin, “The theory of max-min, with applications,” SIAM J. Appl. Math., vol. 14, pp. 641–664, 1966.
J.C. Dunn, “On the classification of singular and nonsingular extremals for the Pontryagin maximum principle,” J. Math. Anal. Appl., vol. 17, pp. 1–36, 1967.
J.C. Dunn, “Global and asymptotic convergence rate estimates for a class of projected gradient processes,” SIAM J. Control Optim., vol. 19, pp. 368–400, 1981.
J.C. Dunn, “On the convergence of projected gradient processes to singular attractors,” J. Optim. Theory Appl., vol. 55, pp. 203–215, 1987.
J.C. Dunn, “A subspace decomposition principle for scaled gradient projection methods: Global theory,” SIAM J. Control Optim., vol. 29, pp. 1160–1175, 1991.
E.M. Gafni and D.P. Bertsekas, “Convergence of a gradient projection method,” Laboratory for Information and Decision Systems Report No.P-121, Massachusetts Institute of Technology, Cambridge, MA, 1982.
E.M. Gafni and D.P. Bertsekas, “Two-metric projection methods for constrained optimization,” SIAM J. Control. Optim., vol. 22, pp. 936–964, 1984.
A.A. Goldstein, “Convex programming in Hilbert space,” Bull. Amer. Math. Soc., vol. 70, pp. 709–710, 1964.
A.A. Goldstein, “On gradient projection,” in Proc. 12th Ann. Allerton Conference on Circuits and Systems, Allerton Park, IL, 1974, pp. 38–40.
K.C. Kiwiel and K. Murty, “Convergence of the steepest descent method for minimizing quasiconvex functions,” J. Optim. Theory Appl., vol. 89, pp. 221–226, 1996.
E.S. Levitin and B.T. Polyak, “Constrained minimization problems,” USSR. Comput. Math. Math. Phys., vol. 6, pp. 1–50, 1966.
Z.Q. Luo and P. Tseng, “On the convergence of the coordinate descent method for convex differentiable minimization,” J. Optim. Theory Appl., vol. 72, pp. 7–35, 1992.
Z.Q. Luo and P. Tseng, “On the linear convergence of descent methods for convex essentially smooth minimization,” SIAM J. Control Optim., vol. 30, pp. 408–425, 1992.
G.P. McCormick and R.A. Tapia, “The gradient projection method under mild differentiability conditions,” SIAM J. Control Optim., vol. 10, pp. 93–98, 1972.
R.R. Phelps, “The gradient projection method using Curry's steplength,” SIAM J. Control Optim., vol. 24, pp. 692–699, 1986.
B. Rustem, “A class of superlinearly convergent projection algorithms with relaxed stepsizes,” Appl. Math. Optim., vol. 12, pp. 29–43, 1984.
M.V. Solodov and P. Tseng, “Modified projection-type methods for monotone variational inequalities,” SIAM J. Control Optim., vol. 34, pp. 1814–1830, 1996.
C.Y. Wang, “Convergence characterizations of both newpivote method and simplified Levitin-Polyak gradient projection method,” Acta Math. Appl. Sinica, vol. 4, pp. 37–52, 1981. (In Chinese)
C.Y. Wang, “On convergence properties of an improved reduced gradient method,” Ke Xue Tongbao, vol. 17, pp. 1030–1033. (In Chinese)
F. Wu and S. Wu, “A modified Frank-Wolfe algorithm and its convergence properties,” Acta Math. Appl. Sinica, vol. 11, pp. 286–291, 1995.
G.L. Xue, “A family of gradient projection algorithms and their convergence properties,” Acta Math. Appl. Sinica, vol. 10, pp. 396–404, 1987. (In Chinese)
G.L. Xue, “On convergence properties of a Least-Distance programming procedure for minimization problems under linear constraints,” J. Optim. Theory Appl., vol. 50, pp. 365–370, 1986.
E.H. Zarantonello, “Projections on convex sets in Hilbert space and spectral theory,” in Contributions to Nonlinear Functional Analysis, E.H. Zarantonello (ed.), Academic Press: New York, 1971.
Cite this article
Wang, C., Xiu, N. Convergence of the Gradient Projection Method for Generalized Convex Minimization. Computational Optimization and Applications 16, 111–120 (2000). https://doi.org/10.1023/A:1008714607737