An improved Dai–Kou conjugate gradient algorithm for unconstrained optimization

Computational Optimization and Applications

Abstract

It has gradually become accepted that the loss of orthogonality of the gradients in a conjugate gradient algorithm may decelerate the convergence rate to some extent. The Dai–Kou conjugate gradient algorithm (SIAM J Optim 23(1):296–320, 2013), called CGOPT, has attracted the attention of many researchers due to its numerical efficiency. In this paper, we present an improved Dai–Kou conjugate gradient algorithm for unconstrained optimization, which consists of only two kinds of iterations. In the improved algorithm, we develop a new quasi-Newton method that improves the orthogonality by solving a subproblem in a subspace, and we design a modified strategy for choosing the initial stepsize to improve the numerical performance. The global convergence of the improved algorithm is established without the strict assumptions required in the convergence analyses of other limited memory conjugate gradient methods. Numerical results suggest that the improved Dai–Kou conjugate gradient algorithm (CGOPT (2.0)) yields a significant improvement over the original Dai–Kou CG algorithm (CGOPT (1.0)) and is slightly superior to the latest limited memory conjugate gradient software package CG_DESCENT (6.8) developed by Hager and Zhang (SIAM J Optim 23(4):2150–2168, 2013) on the CUTEr library.
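
The abstract outlines the algorithm's ingredients without giving formulas. For orientation only, the following is a minimal sketch of a generic nonlinear CG iteration using the Dai–Kou search direction from Ref. 8 below. The generic Wolfe line search from SciPy and the Hager–Zhang-style truncation (Ref. 9) are assumed stand-ins; they are not the improved Wolfe line search, restart test, subspace quasi-Newton step, or initial-stepsize strategy that the paper actually develops.

```python
# A minimal sketch, not the authors' CGOPT (2.0) code: a plain nonlinear
# CG loop with the Dai-Kou search direction (Ref. 8). Assumed
# simplifications: SciPy's textbook Wolfe line search and a Hager-Zhang
# style truncation (Ref. 9); the subspace quasi-Newton iterations and
# the modified initial-stepsize strategy of the paper are omitted.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def dai_kou_cg(f, grad, x0, tol=1e-6, max_iter=50000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # start along steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= tol:  # first-order stopping test
            break
        # standard Wolfe line search (c2 = 0.1 is customary for CG)
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                     # search failed
            if np.array_equal(d, -g):
                break                         # even steepest descent failed
            d = -g                            # restart and retry
            continue
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                         # y_k = g_{k+1} - g_k
        dty = d @ y                           # positive under Wolfe conditions
        # Dai-Kou beta: the tau_k = s_k^T y_k / ||s_k||^2 member of the
        # one-parameter family derived in Ref. 8
        beta = (g_new @ y) / dty - (y @ y) * (g_new @ d) / dty ** 2
        # lower bound on beta in the style of Hager-Zhang to preserve descent
        beta = max(beta, -1.0 / (np.linalg.norm(d) * min(0.01, np.linalg.norm(g))))
        d = -g_new + beta * d                 # next search direction
        x, g = x_new, g_new
    return x

x_star = dai_kou_cg(rosen, rosen_der, np.full(50, -1.2))
print(np.linalg.norm(rosen_der(x_star), np.inf))  # near zero on success
```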

Notes

  1. Available at https://web.xidian.edu.cn/xdliuhongwei/en/paper.html.

References

  1. Fletcher, R., Reeves, C.: Function minimization by conjugate gradients. Comput. J. 7(2), 149–154 (1964)

  2. Hestenes, M.R., Stiefel, E.L.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49(6), 409–436 (1952)

  3. Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Fr. Inform. Rech. Opér. 3, 35–43 (1969)

  4. Polyak, B.T.: The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 9, 94–112 (1969)

  5. Shanno, D.F.: On the convergence of a new conjugate gradient algorithm. SIAM J. Numer. Anal. 15(6), 1247–1257 (1978)

  6. Perry, J. M.: A class of conjugate gradient algorithms with a two-step variable-metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Sciences, Northwestern University, Evanston, Illinois (1977)

  7. Dai, Y.H., Yuan, Y.X.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10(1), 177–182 (1999)

  8. Dai, Y.H., Kou, C.X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23(1), 296–320 (2013)

  9. Hager, W.W., Zhang, H.C.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16(1), 170–192 (2005)

  10. Zhang, L., Zhou, W.J., Li, D.H.: Global convergence of a modified Fletcher–Reeves conjugate gradient method with Armijo-type line search. Numer. Math. 104, 561–572 (2006)

  11. Liu, H.W., Liu, Z.X.: An efficient Barzilai–Borwein conjugate gradient method for unconstrained optimization. J. Optim. Theory Appl. 181(2), 608–633 (2019)

  12. Dai, Y.H., Han, J.Y., Liu, G.H., et al.: Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10(2), 345–358 (1999)

  13. Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43(1), 87–101 (2001)

  14. Hager, W.W., Zhang, H.C.: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2(1), 35–58 (2006)

  15. Dai, Y.H., Yuan, Y.X.: Nonlinear Conjugate Gradient Methods. Shanghai Scientific and Technical Publishers, Shanghai (2000)

  16. Hager, W.W., Zhang, H.C.: The limited memory conjugate gradient method. SIAM J. Optim. 23(4), 2150–2168 (2013)

  17. Schmidt, E.: Über die Auflösung linearer Gleichungen mit unendlich vielen Unbekannten. Rend. Circ. Mat. Palermo 25, 53–77 (1908)

  18. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(1–3), 503–528 (1989)

  19. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999)

  20. Hager, W.W., Zhang, H.C.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32(1), 113–137 (2006)

  21. Biglari, F., Hassan, M.A., Leong, W.J.: New quasi-Newton methods via higher order tensor models. J. Comput. Appl. Math. 235(8), 2412–2422 (2011)

  22. Wei, Z.X., Li, G.Y., Qi, L.Q.: New quasi-Newton methods for unconstrained optimization problems. Appl. Math. Comput. 175(2), 1156–1188 (2006)

  23. Li, D.H., Fukushima, M.: On the global convergence of BFGS method for nonconvex unconstrained optimization problems. SIAM J. Optim. 11(4), 1054–1064 (2001)

  24. Yuan, Y.X.: A modified BFGS algorithm for unconstrained optimization. IMA J. Numer. Anal. 11(3), 325–332 (1991)

  25. Dai, Y.H., Yuan, J.Y., Yuan, Y.X.: Modified two-point stepsize gradient methods for unconstrained optimization problems. Comput. Optim. Appl. 22(1), 103–109 (2002)

  26. Liu, Z.X., Liu, H.W.: An efficient gradient method with approximate optimal stepsize for large-scale unconstrained optimization. Numer. Algorithms 78(1), 21–39 (2018)

  27. Liu, Z.X., Liu, H.W.: An efficient gradient method with approximately optimal stepsize based on tensor model for unconstrained optimization. J. Optim. Theory Appl. 181(2), 608–633 (2019)

  28. Tarzanagh, D.A., Reza Peyghami, M.: A new regularized limited memory BFGS-type method based on modified secant conditions for unconstrained optimization problems. J. Glob. Optim. 63, 709–728 (2015)

  29. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8(1), 141–148 (1988)

  30. Andrei, N.: Open problems in nonlinear conjugate gradient algorithms for unconstrained optimization. Bull. Malays. Math. Sci. Soc. 34(2), 319–330 (2011)

  31. Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr and SifDec: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29(4), 373–394 (2003)

  32. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)

Acknowledgements

We would like to thank the anonymous referees for their useful comments. We would also like to thank Professors W. W. Hager and H. C. Zhang for their C code of CG_DESCENT (6.8). The third author's work was partly supported by the Chinese NSF grants (Nos. 11631013 and 11971372) and the Key Project of Chinese National Programs for Fundamental Research and Development (No. 2015CB856002). The first author's work was supported by the National Natural Science Foundation of China (No. 11901561) and the Natural Science Foundation of Guangxi (No. 2018GXNSFBA281180).

Author information

Corresponding author

Correspondence to Yu-Hong Dai.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Liu, Z., Liu, H. & Dai, YH. An improved Dai–Kou conjugate gradient algorithm for unconstrained optimization. Comput Optim Appl 75, 145–167 (2020). https://doi.org/10.1007/s10589-019-00143-4
