Abstract
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and they have received considerable attention in recent years. In this paper, a new family of conjugate gradient methods is proposed for unconstrained optimization. The family includes two existing practical nonlinear conjugate gradient methods, produces a descent search direction at every iteration, and converges globally provided that the line search satisfies the Wolfe conditions. Numerical experiments testing the efficiency of the new method indicate that it is promising. In addition, the methods related to this family are discussed in a uniform framework.
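The iteration structure the abstract refers to can be illustrated with a generic nonlinear conjugate gradient sketch (not the specific family proposed in the paper): a Wolfe line search determines the step, and a scalar beta (Fletcher-Reeves or Dai-Yuan shown here as examples) combines the new negative gradient with the previous direction. All names and the simple bisection line search below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.4, max_iter=50):
    """Bisection search for a step satisfying the (weak) Wolfe conditions."""
    alpha, lo, hi = 1.0, 0.0, np.inf
    fx, dg0 = f(x), grad(x) @ d          # dg0 < 0 for a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * dg0:
            hi = alpha                    # sufficient decrease failed: shrink
        elif grad(x + alpha * d) @ d < c2 * dg0:
            lo = alpha                    # curvature condition failed: grow
        else:
            return alpha                  # both Wolfe conditions hold
        alpha = (lo + hi) / 2 if hi < np.inf else 2 * lo
    return alpha

def cg_minimize(f, grad, x0, beta="DY", tol=1e-8, max_iter=500):
    """Generic nonlinear CG; beta in {"FR", "DY"} selects the update formula."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wolfe_line_search(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        if beta == "FR":                  # Fletcher-Reeves
            b = (g_new @ g_new) / (g @ g)
        else:                             # Dai-Yuan
            b = (g_new @ g_new) / (d @ (g_new - g))
        d = -g_new + b * d
        if d @ g_new >= 0:                # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x
```

Note that under the Wolfe curvature condition the Dai-Yuan denominator `d @ (g_new - g)` is guaranteed positive, which is what drives the descent and global convergence arguments for that class of methods.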
Acknowledgments
We would like to thank Professor Paul Armand (University of Limoges, France), who has always been generous with his time and advice.
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Cite this article
Sellami, B., Chaib, Y. A new family of globally convergent conjugate gradient methods. Ann Oper Res 241, 497–513 (2016). https://doi.org/10.1007/s10479-016-2120-9