Abstract
The conjugate gradient method is a powerful scheme for solving unconstrained optimization problems, especially large-scale ones. However, without restarts the method converges only linearly. In this paper, we build on an idea contained in [16] and present a new restart technique for the method. Given an arbitrary descent direction d_t and the gradient g_t, our key idea is to use the BFGS updating formula to provide a symmetric positive definite matrix P_t such that d_t = -P_t g_t, and then to define the conjugate gradient iteration in the transformed space. Two conjugate gradient algorithms are designed based on this restart technique. Their global convergence is proved under mild assumptions on the objective function. Numerical experiments are also reported, showing that the two algorithms are comparable to the Beale-Powell restart algorithm.
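The abstract's construction hinges on a standard property of the BFGS update: starting from a symmetric positive definite matrix, one update with a curvature pair satisfying s^T y > 0 preserves positive definiteness, and the induced direction -P g is downhill. The snippet below is only a generic illustration of that mechanism (a single BFGS update of the identity), not the authors' specific construction of P_t from d_t; all variable names are illustrative.

```python
import numpy as np

def bfgs_update(P, s, y):
    """Standard BFGS update of an inverse-Hessian approximation P
    with step s and gradient difference y; requires s^T y > 0."""
    rho = 1.0 / (s @ y)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # V P V^T is symmetric when P is, and rho s s^T adds a PSD rank-1 term,
    # so the result stays symmetric positive definite.
    return V @ P @ V.T + rho * np.outer(s, s)

rng = np.random.default_rng(0)
n = 5
g = rng.standard_normal(n)              # current gradient (illustrative)
s = rng.standard_normal(n)              # step
y = s + 0.1 * rng.standard_normal(n)    # gradient difference
if s @ y <= 0:                          # enforce the curvature condition
    y = s.copy()

P = bfgs_update(np.eye(n), s, y)        # symmetric positive definite
d = -P @ g                              # descent direction: g^T d = -g^T P g < 0
```

Because P is symmetric positive definite, the restart direction -P g makes an acute angle with the steepest-descent direction whenever g is nonzero, which is the property the transformed-space iteration relies on.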
References
E.M.L. Beale, A derivation of conjugate gradients, in: Numerical Methods for Nonlinear Optimization, ed. F.A. Lootsma (Academic Press, London, 1972) pp. 39–43.
E.G. Birgin and J.M. Martínez, A spectral conjugate gradient method for unconstrained optimization, Appl. Math. Optim. 43 (2001) 117–128.
C.G. Broyden, The convergence of a class of double-rank minimization algorithms: 2. The new algorithm, J. Inst. Math. Appl. 6 (1970) 222–231.
A. Buckley, Conjugate gradient methods, in: Nonlinear Optimization, ed. M.J.D. Powell (Academic Press, London, 1982) pp. 17–22.
A. Cohen, Rate of convergence of several conjugate gradient algorithms, SIAM J. Numer. Anal. 9 (1972) 248–259.
H.P. Crowder and P. Wolfe, Linear convergence of the conjugate gradient method, IBM J. Res. Development 16 (1972) 431–433.
Y.H. Dai and Y. Yuan, Convergence properties of the Beale-Powell restart algorithm, Sci. China Ser. A 41 (1998) 1142–1150.
Y.H. Dai and Y. Yuan, A three-parameter family of nonlinear conjugate gradient methods, Math. Comp. (to appear).
R. Fletcher and C.M. Reeves, Function minimization by conjugate gradients, Comput. J. 7 (1964) 149–154.
J.C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization, SIAM J. Optim. 2(1) (1992) 21–42.
G.P. McCormick and K. Ritter, Alternative proofs of the convergence properties of the conjugate gradient method, J. Optim. Theory Appl. 13(5) (1975) 497–518.
M.F. McGuire and P. Wolfe, Evaluating a restart procedure for conjugate gradients, Report RC-4382, IBM Research Center, Yorktown Heights (1973).
J.J. Moré, B.S. Garbow and K.E. Hillstrom, Testing unconstrained optimization software, ACM Trans. Math. Software 7 (1981) 17–41.
E. Polak and G. Ribière, Note sur la convergence de méthodes de directions conjuguées, Rev. Française Inform. Rech. Oper. 3e Année 16 (1969) 35–43.
B.T. Polyak, The conjugate gradient method in extremal problems, USSR Comput. Math. Math. Phys. 9 (1969) 94–112.
M.J.D. Powell, Restart procedures for the conjugate gradient method, Math. Program. 12 (1977) 241–254.
M.J.D. Powell, Nonconvex minimization calculations and the conjugate gradient method, in: Lecture Notes in Mathematics, Vol. 1066 (Springer, New York, 1984) pp. 122–141.
D.F. Shanno and K.H. Phua, Remark on algorithm 500: Minimization of unconstrained multivariate functions, ACM Trans. Math. Software 6 (1980) 618–622.
D.F. Shanno, Conjugate gradient methods with inexact line searches, Math. Oper. Res. 3 (1978) 244–256.
Y. Yuan, Numerical Methods for Nonlinear Programming (Shanghai Scientific & Technical, 1993) (in Chinese).
Dai, YH., Liao, LZ. & Li, D. On Restart Procedures for the Conjugate Gradient Method. Numerical Algorithms 35, 249–260 (2004). https://doi.org/10.1023/B:NUMA.0000021761.10993.6e