
A new descent memory gradient method and its global convergence

Journal of Systems Science and Complexity

Abstract

In this article, a new descent memory gradient method without restarts is proposed for solving large-scale unconstrained optimization problems. The method has two attractive properties: 1) the search direction is a sufficient descent direction at every iteration, regardless of the line search used; 2) the search direction always satisfies the angle property, independent of the convexity of the objective function. Under mild conditions, the authors prove that the proposed method is globally convergent, and its convergence rate is also investigated. Numerical results show that the new descent memory gradient method is efficient on the given test problems.
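The two properties claimed in the abstract are standard notions in the conjugate and memory gradient literature. As a minimal statement (with c and τ generic positive constants, not necessarily the constants used in the paper), writing g_k for the gradient at the k-th iterate and d_k for the search direction:

```latex
% Sufficient descent: d_k makes a uniformly negative inner
% product with the gradient g_k = \nabla f(x_k):
g_k^\top d_k \le -c\,\|g_k\|^2, \qquad c > 0,\ \forall k.

% Angle property: the angle between d_k and the steepest-descent
% direction -g_k stays uniformly bounded away from 90 degrees:
\frac{-g_k^\top d_k}{\|g_k\|\,\|d_k\|} \ge \tau, \qquad \tau > 0,\ \forall k.
```

To make the iteration concrete, the following Python sketch runs a generic two-term memory gradient step, d_{k+1} = -g_{k+1} + beta_k d_k, with an Armijo backtracking line search. The choice of beta_k here is a placeholder heuristic, not the parameter of Sun and Bai: their beta_k is constructed so that sufficient descent holds without restarts, whereas this sketch enforces it with an explicit steepest-descent safeguard.

```python
import numpy as np

def memory_gradient(f, grad, x0, c=0.5, rho=0.5, sigma=1e-4,
                    tol=1e-6, max_iter=1000):
    """Illustrative memory gradient iteration (not the authors' method).

    Direction: d_{k+1} = -g_{k+1} + beta_k * d_k, with a hypothetical
    beta_k and a steepest-descent fallback that enforces the
    sufficient descent condition g^T d <= -c ||g||^2.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Armijo backtracking: accept a with f(x + a d) <= f(x) + sigma a g^T d
        a, gTd = 1.0, g @ d
        while f(x + a * d) > f(x) + sigma * a * gTd:
            a *= rho
        x = x + a * d
        g_new = grad(x)
        # placeholder memory parameter (for illustration only)
        beta = (g_new @ g_new) / max(d @ d, 1e-12)
        d_new = -g_new + beta * d
        if g_new @ d_new > -c * (g_new @ g_new):
            d_new = -g_new                  # safeguard: restart the memory
        g, d = g_new, d_new
    return x

# usage on a simple strictly convex quadratic, f(x) = ||x||^2 / 2
if __name__ == "__main__":
    f = lambda x: 0.5 * (x @ x)
    grad = lambda x: x.copy()
    print(memory_gradient(f, grad, np.ones(5)))   # converges to the origin
```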



Author information

Correspondence to Min Sun.

Additional information

This research is supported by the National Science Foundation of China under Grant No. 70971076 and the Foundation of Shandong Provincial Education Department under Grant No. J10LA59.

This paper was recommended for publication by Editor Shouyang WANG.


Cite this article

Sun, M., Bai, Q. A new descent memory gradient method and its global convergence. J Syst Sci Complex 24, 784–794 (2011). https://doi.org/10.1007/s11424-011-8150-0
