Error bounds and convergence analysis of feasible descent methods: a general approach

  • Part I: Surveys
  • Published in: Annals of Operations Research

Abstract

We survey and extend a general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem. This approach is based on a certain error bound for estimating the distance to the solution set and is applicable to a broad class of methods.
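To make the abstract's central objects concrete, the following is a minimal sketch (not the paper's own algorithm or setting): a projected-gradient iteration, which is a feasible descent method, applied to a small box-constrained convex quadratic, together with the "natural residual" ||x − P(x − ∇f(x))|| — a computable quantity of the kind that error bounds of this type relate to the distance from the iterate to the solution set. The problem instance, step size, and residual choice here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a feasible descent method (projected gradient) on a
# hypothetical box-constrained convex quadratic
#   minimize f(x) = 0.5 * x'Qx - b'x   subject to  x >= 0.
# The natural residual r(x) = ||x - P(x - grad f(x))||, with P the
# projection onto the feasible set, is a computable surrogate for the
# distance to the solution set.

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
Q = A.T @ A + np.eye(5)          # positive definite, so f is strongly convex
b = rng.standard_normal(5)

def grad(x):
    return Q @ x - b

def project(x):
    return np.maximum(x, 0.0)    # projection onto the feasible set {x >= 0}

L = np.linalg.norm(Q, 2)         # Lipschitz constant of grad f (spectral norm)
x = np.ones(5)
residuals = []
for _ in range(1000):
    x = project(x - grad(x) / L)                       # feasible descent step
    residuals.append(np.linalg.norm(x - project(x - grad(x))))

print(residuals[0], residuals[-1])  # residual shrinks toward zero
```

On this strongly convex instance the residual decays geometrically; the survey's point is that a linear rate can be established for a broad class of such methods via the error bound alone, without nondegeneracy assumptions.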





Additional information

The research of the first author is supported by the Natural Sciences and Engineering Research Council of Canada, Grant No. OPG0090391, and the research of the second author is supported by the National Science Foundation, Grant No. CCR-9103804.


Cite this article

Luo, ZQ., Tseng, P. Error bounds and convergence analysis of feasible descent methods: a general approach. Ann Oper Res 46, 157–178 (1993). https://doi.org/10.1007/BF02096261
