A new three-term conjugate gradient algorithm for unconstrained optimization

Original Paper · Numerical Algorithms

Abstract

A new three-term conjugate gradient algorithm which satisfies both the descent condition and the conjugacy condition is presented. The algorithm is obtained by minimizing a one-parameter quadratic model of the objective function in which the symmetric approximation of the Hessian matrix satisfies the general quasi-Newton equation. The search direction is obtained by symmetrizing the iteration matrix corresponding to the solution of the quadratic model minimization. Through the general quasi-Newton equation, the search direction contains a parameter, which is determined by minimizing the condition number of the iteration matrix. It is proved that this direction satisfies both the conjugacy condition and the descent condition. The new approximation of the minimum is obtained by the general Wolfe line search, combined with an acceleration technique that is by now standard. Under standard assumptions, the global convergence of the algorithm is proved for uniformly convex functions. Numerical experiments on 800 large-scale unconstrained optimization test problems show that minimizing the condition number of the iteration matrix leads to a value of the parameter in the search direction that defines a competitive three-term conjugate gradient algorithm. Numerical comparisons of this variant of the algorithm against the known conjugate gradient algorithms ASCALCG, CONMIN, TTCG and THREECG, as well as the limited-memory quasi-Newton algorithm LBFGS (m = 5) and the truncated Newton algorithm TN, show that our algorithm is indeed more efficient and more robust.
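
For orientation, here is a minimal sketch of the generic scheme the paper belongs to: a three-term conjugate gradient iteration under a weak Wolfe line search, written in Python with NumPy. As a stand-in for the paper's direction it uses the Zhang, Zhou and Li coefficients of reference [36], which guarantee sufficient descent for any step length; the paper's own parameter (fixed by minimizing the condition number of the iteration matrix) and its acceleration step are not reproduced here.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=60):
    """Bisection-style search for a step a satisfying the weak Wolfe conditions:
         f(x + a*d) <= f(x) + c1 * a * g^T d   (sufficient decrease)
         grad(x + a*d)^T d >= c2 * g^T d       (curvature)
    """
    fx = f(x)
    gtd = grad(x) @ d            # negative, since d is a descent direction
    lo, hi, a = 0.0, np.inf, 1.0
    for _ in range(max_iter):
        if f(x + a * d) > fx + c1 * a * gtd:
            hi = a                               # step too long: shrink
        elif grad(x + a * d) @ d < c2 * gtd:
            lo = a                               # step too short: grow
        else:
            return a
        a = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * lo
    return a

def three_term_cg(f, grad, x0, tol=1e-6, max_iter=5000):
    """Generic three-term conjugate gradient iteration:
         d_{k+1} = -g_{k+1} + beta_k * d_k - theta_k * y_k,  y_k = g_{k+1} - g_k,
       here with the Zhang-Zhou-Li coefficients [36]
         beta_k  = g_{k+1}^T y_k / ||g_k||^2,
         theta_k = g_{k+1}^T d_k / ||g_k||^2,
       for which g_{k+1}^T d_{k+1} = -||g_{k+1}||^2, i.e. sufficient descent
       holds independently of the line search.
    """
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= tol:
            break
        a = wolfe_line_search(f, grad, x, d)
        x = x + a * d
        g_new = grad(x)
        y = g_new - g
        gg = g @ g                               # ||g_k||^2
        d = -g_new + (g_new @ y) / gg * d - (g_new @ d) / gg * y
        g = g_new
    return x

# Example: minimize the Rosenbrock function from a standard starting point.
f = lambda x: (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
    200.0 * (x[1] - x[0]**2),
])
print(three_term_cg(f, grad, np.array([-1.2, 1.0])))   # approx. [1., 1.]
```

The third term is what distinguishes this family from classical two-term conjugate gradient methods: in the sketch above it corrects the direction so that g_{k+1}^T d_{k+1} = -||g_{k+1}||^2 holds regardless of the step length returned by the line search.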


References

  1. Al-Bayati, A.Y., Sharif, W.H.: A new three-term conjugate gradient method for unconstrained optimization. Can. J. Sci. Eng. Math. 1(5), 108–124 (2010)

  2. Andrei, N.: Scaled conjugate gradient algorithms for unconstrained optimization. Comput. Optim. Appl. 38, 401–416 (2007)

  3. Andrei, N.: Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Optim. Methods Softw. 22, 561–571 (2007)

  4. Andrei, N.: Another Collection of Large-scale Unconstrained Optimization Test Functions. ICI Technical Report, January 30 (2013)

  5. Andrei, N.: Acceleration of conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 213, 361–369 (2009)

  6. Andrei, N.: A modified Polak-Ribière-Polyak conjugate gradient algorithm for unconstrained optimization. Optimization 60, 1457–1471 (2011)

  7. Andrei, N.: On three-term conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 219, 6316–6327 (2013)

  8. Andrei, N.: A simple three-term conjugate gradient algorithm for unconstrained optimization. J. Comput. Appl. Math. 241, 19–29 (2013)

  9. Beale, E.M.L.: A derivation of conjugate gradients. In: Lootsma, F.A. (ed.) Numerical Methods for Nonlinear Optimization, pp. 39–43. Academic Press, London (1972)

  10. Cheng, W.: A two-term PRP-based descent method. Numer. Funct. Anal. Optim. 28, 1217–1230 (2007)

  11. Dai, Y.H., Kou, C.X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23(1), 296–320 (2013)

  12. Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)

  13. Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)

  14. Deng, N.Y., Li, Z.: Global convergence of three-term conjugate gradient methods. Optim. Methods Softw. 4, 273–282 (1995)

  15. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  16. Gilbert, J.C., Lemaréchal, C.: Some numerical experiments with variable storage quasi-Newton algorithms. Math. Program. 45, 407–435 (1989)

  17. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)

  18. Hestenes, M.R., Stiefel, E.L.: Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 49, 409–436 (1952)

  19. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)

  20. Liu, D., Xu, G.: Symmetric Perry conjugate gradient method. Comput. Optim. Appl. 56, 317–341 (2013)

  21. McGuire, M.F., Wolfe, P.: Evaluating a Restart Procedure for Conjugate Gradients. Report RC-4382, IBM Research Center, Yorktown Heights (1973)

  22. Moré, J.J., Thuente, D.J.: On Line Search Algorithms with Guaranteed Sufficient Decrease. Mathematics and Computer Science Division Preprint MCS-P153-0590. Argonne National Laboratory, Argonne, IL (1990)

  23. Narushima, Y., Yabe, H., Ford, J.A.: A three-term conjugate gradient method with sufficient descent property for unconstrained optimization. SIAM J. Optim. 21, 212–230 (2011)

  24. Nash, S.G.: User’s Guide for TN-TNBC: Fortran Routines for Nonlinear Optimization. Report 397, Mathematical Sciences Department, The Johns Hopkins University, Baltimore

  25. Nash, S.G., Nocedal, J.: A numerical study of the limited memory BFGS method and the truncated-Newton method for large scale optimization. SIAM J. Optim. 1, 358–372 (1991)

  26. Nazareth, L.: A conjugate direction algorithm without line search. J. Optim. Theory Appl. 23, 373–387 (1977)

  27. Nocedal, J.: Updating quasi-Newton matrices with limited storage. Math. Comp. 35, 773–782 (1980)

  28. Nocedal, J.: Conjugate gradient methods and nonlinear optimization. In: Adams, L., Nazareth, J.L. (eds.) Linear and Nonlinear Conjugate Gradient Related Methods, pp. 9–23. SIAM (1996)

  29. Perry, A.: A modified conjugate gradient algorithm. Oper. Res. 26(6), 1073–1078 (1978)

  30. Powell, M.J.D.: Nonconvex minimization calculations and the conjugate gradient method. In: Numerical Analysis (Dundee, 1983), Lecture Notes in Mathematics, vol. 1066, pp. 122–141. Springer, Berlin (1984)

  31. Shanno, D.F., Phua, K.H.: Algorithm 500: Minimization of unconstrained multivariate functions. ACM Trans. Math. Softw. 2, 87–94 (1976)

  32. Sugiki, K., Narushima, Y., Yabe, H.: Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization. J. Optim. Theory Appl. 153(3), 733–757 (2012)

  33. Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)

  34. Wolfe, P.: Convergence conditions for ascent methods, (II): some corrections. SIAM Rev. 13, 185–188 (1971)

  35. Zhang, L., Zhou, W., Li, D.H.: A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence. IMA J. Numer. Anal. 26, 629–640 (2006)

  36. Zhang, L., Zhou, W., Li, D.H.: Some descent three-term conjugate gradient methods and their global convergence. Optim. Methods Softw. 22, 697–711 (2007)

  37. Zhang, J., Xiao, Y., Wei, Z.: Nonlinear conjugate gradient methods with sufficient descent condition for large-scale unconstrained optimization. Math. Probl. Eng. 2009, Article ID 243290 (2009). doi:10.1155/2009/243290

  38. Zoutendijk, G.: Nonlinear programming, computational methods. In: Abadie, J. (ed.) Integer and Nonlinear Programming, pp. 38–86. North-Holland, Amsterdam (1970)

Author information

Correspondence to Neculai Andrei.

About this article

Cite this article

Andrei, N. A new three-term conjugate gradient algorithm for unconstrained optimization. Numer Algor 68, 305–321 (2015). https://doi.org/10.1007/s11075-014-9845-9

