Abstract
Suppose z ∈ E^n is a solution to the optimization problem minimize F(x) subject to x ∈ E^n, and suppose an algorithm is available which iteratively constructs a sequence of search directions {s_j} and points {x_j} with the property that x_j → z. A method is presented to accelerate the rate of convergence of {x_j} to z, provided that n consecutive search directions are linearly independent. The accelerating method uses n iterations of the underlying optimization algorithm, followed by a special step; then another n iterations of the underlying algorithm, followed by a second special step; this pattern is then repeated. It is shown that a superlinear rate of convergence applies to the points determined by the special steps. The special step, which uses only first-derivative information, consists of the computation of a search direction and a step size; after a certain number of iterations, a step size of one will always be used. The acceleration method is applied to the projection method of conjugate directions, and the resulting algorithm is shown to have an (n + 1)-step cubic rate of convergence. The acceleration method is based on the work of Best and Ritter [2].
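The cyclic structure described above — n iterations of the underlying algorithm, then one special step built from first-derivative information only — can be sketched as follows. This is an illustration under stated assumptions, not the paper's algorithm: the underlying method is stood in for by plain gradient descent, and the special step is replaced by a hypothetical secant (quasi-Newton-style) step that recovers a Hessian estimate from the last n point/gradient differences, which requires the n search directions to be linearly independent, as the abstract assumes.

```python
import numpy as np

def accelerated_minimize(grad, x0, n, cycles=3, alpha=0.1, tol=1e-12):
    """Sketch of the cyclic acceleration pattern: n underlying
    iterations, then a special step, repeated. Illustration only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(cycles):
        if np.linalg.norm(grad(x)) < tol:
            break
        xs, gs = [x], [grad(x)]
        for _ in range(n):
            # Stand-in for one iteration of the underlying algorithm
            # (here: a fixed-step gradient-descent move).
            x = x - alpha * grad(x)
            xs.append(x)
            gs.append(grad(x))
        # Hypothetical "special step": from the n point and gradient
        # differences, form B ~ Hessian (valid when the n directions
        # are linearly independent) and take a Newton-like step with
        # step size one, using first derivatives only.
        DX = np.column_stack([xs[i + 1] - xs[i] for i in range(n)])
        DG = np.column_stack([gs[i + 1] - gs[i] for i in range(n)])
        B = DG @ np.linalg.inv(DX)
        x = x - np.linalg.solve(B, grad(x))
    return x
```

On a quadratic F(x) = ½ xᵀAx the gradient differences satisfy DG = A·DX exactly, so the secant estimate B equals A and the special step lands on the minimizer in one cycle — a toy analogue of the fast convergence the special steps provide.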
References
M.J. Best, “A feasible conjugate direction method to solve linearly constrained minimization problems”, Journal of Optimization Theory and Applications, to appear.
M.J. Best and K. Ritter, “An accelerated conjugate direction method to solve linearly constrained minimization problems”, Journal of Computer and Systems Sciences, to appear.
L.C.W. Dixon, “Conjugate directions without linear searches”, Journal of the Institute of Mathematics and its Applications 11 (1973) 317–328.
L.C.W. Dixon, “Conjugate gradient algorithms: quadratic termination without linear searches”, Rept. No. 38, The Hatfield Polytechnic, Numerical Optimization Centre (1972).
H.Y. Huang, “Method of dual matrices for function minimization”, Aero-Astronautics Rept. No. 88, Rice University (1972).
H.Y. Huang and J.P. Chambliss, “Numerical experiments on dual matrix algorithms for function minimization”, Aero-Astronautics Rept. No. 89, Rice University (1972).
H.Y. Huang and J.P. Chambliss, “Quadratically convergent algorithms and one-dimensional search schemes”, Journal of Optimization Theory and Applications 11 (1973) 175–188.
G.P. McCormick and K. Ritter, “Methods of conjugate directions versus quasi-Newton methods”, Mathematical Programming 3 (1972) 101–116.
K. Ritter, “A method of conjugate directions for linearly constrained nonlinear programming problems”, SIAM Journal on Numerical Analysis, to appear.
D.J. Topkis and A.F. Veinott, “On the convergence of some feasible direction algorithms for nonlinear programming”, SIAM Journal on Control 5 (1967) 268–279.
M.M. Vainberg, Variational methods for the study of nonlinear operators (Holden-Day, San Francisco, 1964).
G. Zoutendijk, Methods of feasible directions (Elsevier, Amsterdam, 1960).
IBM System/360 Scientific Subroutine Package, Version III, Subroutine RANDU, Program Number 360A-CM-03K, fifth edition, 1970.
Additional information
This work was supported by the National Research Council of Canada under Research Grant A8189.
Best, M.J. A method to accelerate the rate of convergence of a class of optimization algorithms. Mathematical Programming 9, 139–160 (1975). https://doi.org/10.1007/BF01681341