Abstract
It is increasingly recognized that the loss of orthogonality among the gradients in a conjugate gradient algorithm can slow its convergence. The Dai–Kou conjugate gradient algorithm (SIAM J Optim 23(1):296–320, 2013), called CGOPT, has attracted considerable attention for its numerical efficiency. In this paper, we present an improved Dai–Kou conjugate gradient algorithm for unconstrained optimization that consists of only two kinds of iterations. In the improved algorithm, we develop a new quasi-Newton method that improves orthogonality by solving a subproblem in a subspace, and we design a modified strategy for choosing the initial stepsize to improve numerical performance. The global convergence of the improved algorithm is established without the strict assumptions required in the convergence analyses of other limited memory conjugate gradient methods. Numerical results on the CUTEr library suggest that the improved Dai–Kou conjugate gradient algorithm (CGOPT (2.0)) yields a substantial improvement over the original Dai–Kou algorithm (CGOPT (1.0)) and is slightly superior to the latest limited memory conjugate gradient software package CG\(\_ \)DESCENT (6.8) developed by Hager and Zhang (SIAM J Optim 23(4):2150–2168, 2013).
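For readers new to CGOPT, the following sketch recalls the baseline Dai–Kou family of search directions from the 2013 paper cited above; it is background only, not the improved iteration introduced in this paper. Writing \(g_k = \nabla f(x_k)\), \(s_k = x_{k+1} - x_k\) and \(y_k = g_{k+1} - g_k\), the family derived from the self-scaling memoryless BFGS update with scaling parameter \(\tau_k\) reads
\[
d_{k+1} = -g_{k+1} + \beta_k(\tau_k)\, d_k, \qquad
\beta_k(\tau_k) = \frac{g_{k+1}^{T} y_k}{d_k^{T} y_k} - \left( \tau_k + \frac{\|y_k\|^{2}}{s_k^{T} y_k} - \frac{s_k^{T} y_k}{\|s_k\|^{2}} \right) \frac{g_{k+1}^{T} s_k}{d_k^{T} y_k}.
\]
Initial stepsizes in methods of this kind are often seeded with a Barzilai–Borwein-type value such as \(\alpha_k^{0} = s_{k-1}^{T} s_{k-1} / s_{k-1}^{T} y_{k-1}\); the modified strategy proposed here is specified in the body of the paper.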
Notes
Available at https://web.xidian.edu.cn/xdliuhongwei/en/paper.html.
References
Fletcher, R., Reeves, C.M.: Function minimization by conjugate gradients. Comput. J. 7(2), 149–154 (1964)
Hestenes, M.R., Stiefel, E.L.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49(6), 409–436 (1952)
Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Fr. Inform. Rech. Opér. 3, 35–43 (1969)
Polyak, B.T.: The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 9, 94–112 (1969)
Shanno, D.F.: On the convergence of a new conjugate gradient algorithm. SIAM J. Numer. Anal. 15(6), 1247–1257 (1978)
Perry, J.M.: A class of conjugate gradient algorithms with a two-step variable-metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Sciences, Northwestern University, Evanston, Illinois (1977)
Dai, Y.H., Yuan, Y.X.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177–182 (1999)
Dai, Y.H., Kou, C.X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23(1), 296–320 (2013)
Hager, W.W., Zhang, H.C.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16(1), 170–192 (2005)
Zhang, L., Zhou, W.J., Li, D.H.: Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search. Numer. Math. 104, 561–572 (2006)
Liu, H.W., Liu, Z.X.: An efficient Barzilai–Borwein conjugate gradient method for unconstrained optimization. J. Optim. Theory Appl. 180(3), 879–906 (2019)
Dai, Y.H., Han, J.Y., Liu, G.H., et al.: Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10(2), 345–358 (1999)
Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43(1), 87–101 (2001)
Hager, W.W., Zhang, H.C.: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2(1), 35–58 (2006)
Dai, Y.H., Yuan, Y.X.: Nonlinear Conjugate Gradient Methods. Shanghai Scientific and Technical Publishers, Shanghai (2000)
Hager, W.W., Zhang, H.C.: The limited memory conjugate gradient method. SIAM J. Optim. 23(4), 2150–2168 (2013)
Schmidt, E.: Über die Auflösung linearer Gleichungen mit unendlich vielen Unbekannten. Rend. Circ. Mat. Palermo 25, 53–77 (1908)
Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(1–3), 503–528 (1989)
Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999)
Hager, W.W., Zhang, H.C.: Algorithm 851: CG\(\_ \)DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32(1), 113–137 (2006)
Biglari, F., Hassan, M.A., Leong, W.J.: New quasi-Newton methods via higher order tensor models. J. Comput. Appl. Math. 235(8), 2412–2422 (2011)
Wei, Z.X., Li, G.Y., Qi, L.Q.: New quasi-Newton methods for unconstrained optimization problems. Appl. Math. Comput. 175(2), 1156–1188 (2006)
Li, D.H., Fukushima, M.: On the global convergence of BFGS method for nonconvex unconstrained optimization problems. SIAM J. Optim. 11(4), 1054–1064 (2001)
Yuan, Y.X.: A modified BFGS algorithm for unconstrained optimization. IMA J. Numer. Anal. 11(3), 325–332 (1991)
Dai, Y.H., Yuan, J.Y., Yuan, Y.X.: Modified two-point stepsize gradient methods for unconstrained optimization problems. Comput. Optim. Appl. 22(1), 103–109 (2002)
Liu, Z.X., Liu, H.W.: An efficient gradient method with approximate optimal stepsize for large-scale unconstrained optimization. Numer. Algorithms 78(1), 21–39 (2018)
Liu, Z.X., Liu, H.W.: An efficient gradient method with approximately optimal stepsize based on tensor model for unconstrained optimization. J. Optim. Theory Appl. 181(2), 608–633 (2019)
Tarzanagh, D.A., Peyghami, M.R.: A new regularized limited memory BFGS-type method based on modified secant conditions for unconstrained optimization problems. J. Glob. Optim. 63, 709–728 (2015)
Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8(1), 141–148 (1988)
Andrei, N.: Open problems in nonlinear conjugate gradient algorithms for unconstrained optimization. Bull. Malays. Math. Sci. Soc. 34(2), 319–330 (2011)
Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29(4), 373–394 (2003)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)
Acknowledgements
We would like to thank the anonymous referees for their useful comments. We also thank Professors W. W. Hager and H. C. Zhang for their C code of CG\(\_ \)DESCENT (6.8). The third author's work was partly supported by the Chinese NSF grants (Nos. 11631013 and 11971372) and the Key Project of Chinese National Programs for Fundamental Research and Development (No. 2015CB856002). The first author's work was supported by the National Natural Science Foundation of China (No. 11901561) and the Natural Science Foundation of Guangxi (No. 2018GXNSFBA281180).
Cite this article
Liu, Z.X., Liu, H.W., Dai, Y.H.: An improved Dai–Kou conjugate gradient algorithm for unconstrained optimization. Comput. Optim. Appl. 75, 145–167 (2020). https://doi.org/10.1007/s10589-019-00143-4
Keywords
- Conjugate gradient algorithm
- Limited memory
- Quasi-Newton method
- Preconditioned conjugate gradient algorithm
- Global convergence