Abstract
In this paper, we present a new supermemory gradient method without line search for unconstrained optimization problems. The method guarantees a descent direction at each iteration. It makes full use of the previous multi-step iterative information and avoids storing or computing any matrices associated with the Hessian of the objective function, so it is well suited to large-scale optimization problems. We prove its global convergence under mild conditions, and we analyze its linear convergence rate when the objective function is uniformly convex and twice continuously differentiable.
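The chapter body is not shown here, so the authors' exact direction and stepsize formulas are unavailable. The Python sketch below illustrates only the generic supermemory gradient template the abstract describes: the search direction combines the current negative gradient with the m most recent directions, and the stepsize comes from a closed-form rule rather than a line search. The weights beta and the stepsize rule are hypothetical placeholders, not the ones proposed in the paper.

import numpy as np

def supermemory_gradient(grad, x0, m=3, max_iter=1000, tol=1e-6):
    # Generic supermemory gradient iteration (illustrative sketch):
    # d_k mixes the current negative gradient with the m most recent
    # directions; the stepsize is a fixed formula, not a line search.
    x = np.asarray(x0, dtype=float)
    history = []  # previous directions d_{k-1}, ..., d_{k-m}
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        for i, d_prev in enumerate(history, start=1):
            # Hypothetical damped weight: sum_i beta_i * ||d_{k-i}|| <= 0.1 * ||g_k||,
            # hence g_k^T d_k <= -0.9 * ||g_k||^2 and d_k is a descent direction.
            beta = 0.1 * np.linalg.norm(g) / (2**i * (np.linalg.norm(d_prev) + 1e-12))
            d += beta * d_prev
        alpha = 1.0 / (k + 1)  # placeholder diminishing stepsize (no line search)
        x = x + alpha * d
        history = [d] + history[: m - 1]  # keep at most m previous directions
    return x

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
x_star = supermemory_gradient(lambda x: 2.0 * x, np.ones(5))

With the damped weights above, every d_k is a descent direction regardless of the stored history, and only gradients and a handful of vectors are kept in memory; this is the property that makes such methods attractive for large-scale problems.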
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Liu, J., Liu, H., Zheng, Y. (2009). A New Supermemory Gradient Method without Line Search for Unconstrained Optimization. In: Wang, H., Shen, Y., Huang, T., Zeng, Z. (eds) The Sixth International Symposium on Neural Networks (ISNN 2009). Advances in Intelligent and Soft Computing, vol 56. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01216-7_68
DOI: https://doi.org/10.1007/978-3-642-01216-7_68
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-01215-0
Online ISBN: 978-3-642-01216-7
eBook Packages: Engineering, Engineering (R0)