Abstract
This paper presents an accelerated hybrid conjugate gradient algorithm. The parameter \(\beta_k\) is computed as a convex combination of \(\beta_k^{HS}\) (Hestenes and Stiefel, J Res Natl Bur Stand 49:409–436, 1952) and \(\beta_k^{DY}\) (Dai and Yuan, SIAM J Optim 10:177–182, 1999), i.e. \(\beta_k^C = \left(1-\theta_k\right)\beta_k^{HS} + \theta_k \beta_k^{DY}\). The parameter \(\theta_k\) in the convex combination is computed so that the direction corresponding to the conjugate gradient algorithm is the best direction we know, i.e. the Newton direction, while the pair \((s_k, y_k)\) satisfies the modified secant condition given by Li et al. (J Comput Appl Math 202:523–539, 2007), \(B_{k+1} s_k = z_k\), where \(z_k = y_k + \left({\eta_k} / {\left\| s_k \right\|^2}\right) s_k\), \(\eta_k = 2\left(f_k - f_{k+1}\right) + \left(g_k + g_{k+1}\right)^T s_k\), \(s_k = x_{k+1} - x_k\) and \(y_k = g_{k+1} - g_k\). It is shown that, both for uniformly convex functions and for general nonlinear functions, the algorithm with strong Wolfe line search is globally convergent. The algorithm uses an acceleration scheme that modifies the steplength \(\alpha_k\) to improve the reduction of the function values along the iterations. Numerical comparisons with conjugate gradient algorithms show that this hybrid computational scheme outperforms a variant of the hybrid conjugate gradient algorithm given by Andrei (Numer Algorithms 47:143–156, 2008), in which the pair \((s_k, y_k)\) satisfies the classical secant condition \(B_{k+1} s_k = y_k\), as well as some other conjugate gradient algorithms, including Hestenes-Stiefel, Dai-Yuan, Polak-Ribière-Polyak, Liu-Storey, hybrid Dai-Yuan, and Gilbert-Nocedal. A set of 75 unconstrained optimization problems in 10 different dimensions is used (Andrei, Adv Model Optim 10:147–161, 2008).
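To make the quantities above concrete, the following is a minimal Python sketch assembled only from the formulas quoted in this abstract: it computes \(\beta_k^{HS}\), \(\beta_k^{DY}\), the convex combination \(\beta_k^C\), and the modified secant vector \(z_k\). The function and variable names are illustrative, not from the paper's code, and \(\theta_k\) is taken as a given input here, whereas the paper determines it from the Newton-direction condition.

```python
import numpy as np

def hybrid_beta_and_secant(g_k, g_k1, d_k, s_k, f_k, f_k1, theta_k):
    """Illustrative sketch (not the paper's implementation): assemble
    beta_k^C and z_k from the formulas stated in the abstract."""
    y_k = g_k1 - g_k                      # y_k = g_{k+1} - g_k
    denom = d_k @ y_k                     # shared HS/DY denominator d_k^T y_k
    beta_hs = (g_k1 @ y_k) / denom        # Hestenes-Stiefel parameter
    beta_dy = (g_k1 @ g_k1) / denom       # Dai-Yuan parameter
    # Convex combination with theta_k in [0, 1]
    beta_c = (1.0 - theta_k) * beta_hs + theta_k * beta_dy
    # Modified secant condition of Li et al. (2007): B_{k+1} s_k = z_k
    eta_k = 2.0 * (f_k - f_k1) + (g_k + g_k1) @ s_k
    z_k = y_k + (eta_k / (s_k @ s_k)) * s_k
    return beta_c, z_k
```

A standard conjugate gradient iteration would then update the search direction as \(d_{k+1} = -g_{k+1} + \beta_k^C d_k\); since \(\theta_k \in [0,1]\), the value \(\beta_k^C\) always lies between the Hestenes-Stiefel and Dai-Yuan values.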
References
Andrei, N.: Numerical comparison of conjugate gradient algorithms for unconstrained optimization. Stud. Inform. Control 16, 333–352 (2007)
Andrei, N.: An unconstrained optimization test functions collection. Adv. Model. Optim. 10, 147–161 (2008)
Andrei, N.: Another hybrid conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 47, 143–156 (2008)
Andrei, N.: A hybrid conjugate gradient algorithm for unconstrained optimization. J. Optim. Theory Appl. 141, 249–264 (2009)
Andrei, N.: Performance profiles of conjugate gradient algorithms for unconstrained optimization. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization, 2nd edn., vol. P, pp. 2938–2953. Springer, New York (2009)
Andrei, N.: Acceleration of conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 213, 361–369 (2009)
Aris, R.: The Mathematical Theory of Diffusion and Reaction in Permeable Catalysts. Clarendon, Oxford (1975)
Averick, B.M., Carter, R.G., Moré, J.J., Xue, G.L.: The MINPACK-2 Test Problem Collection. Mathematics and Computer Science Division, Argonne National Laboratory, Preprint MCS-P153-0692 (1992)
Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)
Cimatti, G.: On a problem of the theory of lubrication governed by a variational inequality. Appl. Math. Optim. 3, 227–242 (1977)
Dai, Y.H.: New properties of a nonlinear conjugate gradient method. Numer. Math. 89, 83–98 (2001)
Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)
Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)
Dai, Y.H., Yuan, Y.: An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. 103, 33–47 (2001)
Dai, Y.H., Han, J.Y., Liu, G.H., Sun, D.F., Yin, X., Yuan, Y.: Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10, 348–358 (1999)
Daniel, J.W.: The conjugate gradient method for linear and nonlinear operator equations. SIAM J. Numer. Anal. 4, 10–26 (1967)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
Fletcher, R.: Practical Methods of Optimization, vol. 1: Unconstrained Optimization. Wiley, New York (1987)
Fletcher, R., Reeves, C.: Function minimization by conjugate gradients. Comput. J. 7, 149–154 (1964)
Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21–42 (1992)
Glowinski, R.: Numerical Methods for Nonlinear Variational Problems. Springer, Berlin (1984)
Goodman, J., Kohn, R., Reyna, L.: Numerical study of a relaxed variational problem from optimal design. Comput. Methods Appl. Mech. Eng. 57, 107–127 (1986)
Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)
Hager, W.W., Zhang, H.: A survey of nonlinear conjugate gradient methods. Pacific J. Optim. 2, 35–58 (2006)
Hestenes, M.R., Stiefel, E.L.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409–436 (1952)
Hu, Y.F., Storey, C.: Global convergence result for conjugate gradient methods. J. Optim. Theory Appl. 71, 399–405 (1991)
Li, G., Tang, C., Wei, Z.: New conjugacy condition and related new conjugate gradient methods for unconstrained optimization. J. Comput. Appl. Math. 202, 523–539 (2007)
Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)
Liu, Y., Storey, C.: Efficient generalized conjugate gradient algorithms, part 1: theory. J. Optim. Theory Appl. 69, 129–137 (1991)
Nitsche, J.C.C.: Lectures on Minimal Surfaces, vol. 1. Cambridge University Press, Cambridge (1989)
Nocedal, J.: Conjugate gradient methods and nonlinear optimization. In: Adams, L., Nazareth, J.L. (eds.) Linear and Nonlinear Conjugate Gradient—Related Methods, pp. 9–23. SIAM, Philadelphia (1996)
Perry, A.: A modified conjugate gradient algorithm. Discussion Paper no. 229, Center for Mathematical Studies in Economics and Management Science, Northwestern University (1976)
Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Française Informat. Recherche Opérationnelle 3(16), 35–43 (1969)
Polyak, B.T.: The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 9, 94–112 (1969)
Powell, M.J.D.: Restart procedures for the conjugate gradient method. Math. Program. 12, 241–254 (1977)
Powell, M.J.D.: Nonconvex minimization calculations and the conjugate gradient method. In: Numerical Analysis (Dundee, 1983). Lecture Notes in Mathematics, vol. 1066, pp. 122–141. Springer, Berlin (1984)
Touati-Ahmed, D., Storey, C.: Efficient hybrid conjugate gradient techniques. J. Optim. Theory Appl. 64, 379–397 (1990)
Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)
Zhang, J.Z., Deng, N.Y., Chen, L.H.: New quasi-Newton equation and related methods for unconstrained optimization. J. Optim. Theory Appl. 102, 147–167 (1999)
Cite this article
Andrei, N. Accelerated hybrid conjugate gradient algorithm with modified secant condition for unconstrained optimization. Numer Algor 54, 23–46 (2010). https://doi.org/10.1007/s11075-009-9321-0
Keywords
- Unconstrained optimization
- Hybrid conjugate gradient method
- Newton direction
- Modified secant condition
- Numerical comparisons