A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei

Abstract

In (Andrei, Comput. Optim. Appl. 38:401–416, 2007), the efficient scaled conjugate gradient algorithm SCALCG is proposed for solving unconstrained optimization problems. However, because an incorrect inequality is used in (Andrei, Comput. Optim. Appl. 38:401–416, 2007) to establish the sufficient descent property of the search directions of SCALCG, the proof of Theorem 2, the global convergence theorem of SCALCG, is flawed. Here, in order to complete the proof of Theorem 2 in (Andrei, Comput. Optim. Appl. 38:401–416, 2007), we show that the search directions of SCALCG satisfy the sufficient descent condition. It is worth noting that the convergence analyses in (Andrei, Optim. Methods Softw. 22:561–571, 2007; Eur. J. Oper. Res. 204:410–420, 2010) should be revised similarly.
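For context, the following is a minimal sketch of the quantities the abstract refers to, assuming the standard notation of the conjugate gradient literature; the scaled direction shown is of the Birgin–Martínez type [5], and the precise SCALCG quantities, including its scaling parameter, are defined in (Andrei, Comput. Optim. Appl. 38:401–416, 2007).

```latex
% A minimal sketch in assumed standard notation: g_k = \nabla f(x_k) is the
% gradient, s_k = x_{k+1} - x_k, \theta_{k+1} > 0 is a scaling (spectral)
% parameter, and \beta_k is the conjugate gradient parameter. The precise
% SCALCG quantities are defined in ref. [1].

% Scaled conjugate gradient direction of the Birgin--Mart\'inez type [5]:
\[
  d_0 = -g_0, \qquad
  d_{k+1} = -\theta_{k+1}\, g_{k+1} + \beta_k\, s_k .
\]

% Sufficient descent condition: there exists a constant c > 0 such that
\[
  g_k^{\top} d_k \le -c\, \| g_k \|^{2} \quad \text{for all } k \ge 0 .
\]

% Together with the Wolfe line-search conditions [9, 10], sufficient descent
% is the ingredient needed to complete the global convergence proof of
% Theorem 2.
```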

References

  1. Andrei, N.: Scaled conjugate gradient algorithms for unconstrained optimization. Comput. Optim. Appl. 38(3), 401–416 (2007)

  2. Andrei, N.: Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Optim. Methods Softw. 22(4), 561–571 (2007)

  3. Andrei, N.: Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization. Eur. J. Oper. Res. 204(3), 410–420 (2010)

  4. Barzilai, J., Borwein, J.M.: Two-point stepsize gradient methods. IMA J. Numer. Anal. 8(1), 141–148 (1988)

  5. Birgin, E., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43, 117–128 (2001)

  6. Dai, Y.H., Han, J.Y., Liu, G.H., Sun, D.F., Yin, H.X., Yuan, Y.X.: Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10(2), 348–358 (1999)

  7. Shanno, D.F.: Conjugate gradient methods with inexact searches. Math. Oper. Res. 3(3), 244–256 (1978)

  8. Sun, W., Yuan, Y.X.: Optimization Theory and Methods: Nonlinear Programming. Springer, New York (2006)

  9. Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)

  10. Wolfe, P.: Convergence conditions for ascent methods, II: Some corrections. SIAM Rev. 13, 185–188 (1971)

Author information

Correspondence to Saman Babaie-Kafaki.

About this article

Cite this article

Babaie-Kafaki, S. A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei. Comput Optim Appl 52, 409–414 (2012). https://doi.org/10.1007/s10589-011-9413-3
