Abstract
Convergence properties of restarted conjugate gradient methods are investigated for the case where the usual requirement that an exact line search be performed at each iteration is relaxed.
The objective function is assumed to have continuous second derivatives and the eigenvalues of the Hessian are assumed to be bounded above and below by positive constants. It is further assumed that a Lipschitz condition on the second derivatives is satisfied at the location of the minimum.
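In symbols, these standing assumptions can be sketched as follows; the constant names m, M, and L are introduced here for illustration and are not taken from the paper's notation:

```latex
% f has continuous second derivatives, its Hessian has eigenvalues
% bounded above and below by positive constants m and M,
\[
  m I \preceq \nabla^2 f(x) \preceq M I
  \quad \text{for all } x, \qquad 0 < m \le M,
\]
% and the second derivatives satisfy a Lipschitz condition at the
% minimizer x*:
\[
  \| \nabla^2 f(x) - \nabla^2 f(x^\ast) \| \le L \, \| x - x^\ast \| .
\]
```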
A class of descent methods is described whose members exhibit n-step quadratic convergence when restarted, even though errors are permitted in the line search. It is then shown that two conjugate gradient methods belong to this class.
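To make the setting concrete, here is a minimal sketch of a restarted conjugate gradient iteration with a deliberately inexact line search. The Polak–Ribière update, the Armijo backtracking rule, and all tolerance and restart-interval choices are standard illustrative assumptions, not the paper's specific algorithm or its conditions on the line-search error:

```python
import numpy as np

def restarted_cg(f, grad, x0, n_restart, tol=1e-8, max_iter=500):
    """Polak-Ribiere conjugate gradient, restarted with the steepest
    descent direction every n_restart steps, using an inexact
    (Armijo backtracking) line search in place of an exact one.
    All parameter values here are illustrative, not the paper's."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g          # safeguard: restart if d is not a descent direction
        # Inexact line search: accept the first step length giving
        # sufficient decrease (Armijo condition), so the minimizer
        # along d is located only approximately.
        alpha, c1 = 1.0, 1e-4
        fx, slope = f(x), g.dot(d)
        while f(x + alpha * d) > fx + c1 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if (k + 1) % n_restart == 0:
            beta = 0.0      # periodic restart: next direction is -gradient
        else:
            beta = g_new.dot(g_new - g) / g.dot(g)   # Polak-Ribiere update
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# A strictly convex quadratic: its Hessian A has eigenvalues bounded
# above and below by positive constants, as the assumptions require.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_min = restarted_cg(f, grad, np.zeros(2), n_restart=2)
print(x_min, np.linalg.solve(A, b))   # iterate vs. exact minimizer
```

On such a test problem the iterates approach the minimizer rapidly even though each step length is only approximate; the abstract's n-step quadratic convergence result makes this behavior precise for the general nonquadratic case under the stated assumptions.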
Additional information
Sponsored by the United States Army under Contract No. DA-31-124-ARO-D-462.
Cite this article
Lenard, M.L. Convergence conditions for restarted conjugate gradient methods with inaccurate line searches. Mathematical Programming 10, 32–51 (1976). https://doi.org/10.1007/BF01580652