
Globally convergent conjugate gradient algorithms

Published in: Mathematical Programming

Abstract

The paper examines a method of implementing an angle test for determining when to restart conjugate gradient methods in a steepest descent direction. The test is based on guaranteeing that the cosine of the angle between the search direction and the negative gradient is within a constant multiple of the cosine of the angle between the Fletcher-Reeves search direction and the negative gradient. This guarantees convergence, since the Fletcher-Reeves method is known to converge. Numerical results indicate that little, if anything, is lost in efficiency, and that gains may well be possible for large problems.
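
The following sketch illustrates, in Python, the kind of angle test described above: a candidate conjugate gradient direction d is accepted only if cos∠(d, -g) is at least a constant multiple c of cos∠(d_FR, -g), where d_FR is the Fletcher-Reeves direction; otherwise the method restarts along the steepest descent direction -g. The constant c, the function names, and the use of a Polak-Ribière candidate direction are illustrative assumptions, not the paper's exact algorithm or parameter values.

```python
import numpy as np

C = 0.1  # assumed constant multiple c; the paper's value is not reproduced here

def cos_neg_grad(d, g):
    """Cosine of the angle between direction d and the negative gradient -g."""
    return float(-(d @ g)) / (np.linalg.norm(d) * np.linalg.norm(g))

def next_direction(g_new, g_old, d_old):
    """Return the next search direction, restarting along -g_new whenever the
    candidate direction fails the angle test against the Fletcher-Reeves direction."""
    beta_fr = (g_new @ g_new) / (g_old @ g_old)            # Fletcher-Reeves beta
    beta_pr = (g_new @ (g_new - g_old)) / (g_old @ g_old)  # Polak-Ribiere beta (assumed candidate)

    d_fr = -g_new + beta_fr * d_old    # reference direction: FR is known to converge
    d_cand = -g_new + beta_pr * d_old  # candidate direction to be tested

    # Accept the candidate only if its angle with -g_new is comparable, within the
    # constant multiple C, to the angle made by the Fletcher-Reeves direction;
    # otherwise restart in the steepest descent direction.
    if cos_neg_grad(d_cand, g_new) >= C * cos_neg_grad(d_fr, g_new):
        return d_cand
    return -g_new
```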


References

  1. E.M.L. Beale, “A derivation of conjugate gradients”, in: F.A. Lootsma, ed., Numerical methods for nonlinear optimization (Academic Press, London, 1972) pp. 39–43.

  2. R. Fletcher and C.M. Reeves, “Function minimization by conjugate gradients”, Computer Journal 7 (1964) 149–154.

  3. M.R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems”, Journal of Research of the National Bureau of Standards, Sec. B 48 (1952) 409–436.

  4. J.J. Moré, B.S. Garbow and K.E. Hillstrom, “Testing unconstrained minimization software”, TM-324, Applied Mathematics Division, Argonne National Laboratory, Argonne, IL (1978).

  5. E. Polak and G. Ribière, “Note sur la convergence de méthodes de directions conjuguées” [Note on the convergence of conjugate direction methods], RAIRO 16 (1969) 35–43.

  6. M.J.D. Powell, “Restart procedures for the conjugate gradient method”, Mathematical Programming 12 (1977) 241–254.

  7. M.J.D. Powell, “Nonconvex minimization calculations and the conjugate gradient method”, DAMTP 1983/NA 14, Dept. of Applied Mathematics and Theoretical Physics, University of Cambridge (Cambridge, England, 1983).

  8. D.F. Shanno, “Conjugate gradient methods with inexact searches”, Mathematics of Operations Research 3 (1978) 244–256.

  9. D.F. Shanno, “On the convergence of a new conjugate gradient algorithm”, SIAM Journal on Numerical Analysis 15 (1978) 1247–1257.

  10. D.F. Shanno and K.H. Phua, “Remark on algorithm 500”, ACM Transactions on Mathematical Software 6 (1980) 618–622.

  11. G. Zoutendijk, “Nonlinear programming, computational methods”, in: J. Abadie, ed., Integer and nonlinear programming (North-Holland, Amsterdam, 1970) pp. 37–86.


Cite this article

Shanno, D.F. Globally convergent conjugate gradient algorithms. Mathematical Programming 33, 61–67 (1985). https://doi.org/10.1007/BF01582011
