Abstract
Many trust region algorithms for unconstrained minimization have excellent global convergence properties if their second derivative approximations are not too large [2]. We consider how large these approximations must be in order to prevent convergence when the objective function is bounded below and continuously differentiable. Thus we obtain a useful convergence result for the case in which the bound on the second derivative approximations depends linearly on the iteration number.
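To illustrate the setting, the following is a minimal trust region sketch, not the paper's algorithm: the quadratic model uses the trivially bounded second derivative approximation B_k = I, each trial step is the Cauchy point (the model minimizer along the steepest descent direction within the region), and the standard ratio test adjusts the radius. All function names and the 0.25/0.75 thresholds are illustrative choices, not taken from the paper.

```python
import math

def trust_region_minimize(f, grad, x0, delta0=1.0, max_iter=500, tol=1e-8):
    """Illustrative trust region iteration with model B_k = I (sketch only)."""
    x = list(x0)
    delta = delta0
    for _ in range(max_iter):
        g = grad(x)
        gnorm = math.sqrt(sum(gi * gi for gi in g))
        if gnorm < tol:
            break
        # Cauchy step for m(s) = f(x) + g's + 0.5 s's: the minimizer along
        # -g, clipped so the step stays inside the trust region radius.
        t = min(1.0, delta / gnorm)
        step = [-t * gi for gi in g]
        snorm = t * gnorm
        pred = t * gnorm ** 2 - 0.5 * snorm ** 2          # predicted reduction
        trial = [xi + si for xi, si in zip(x, step)]
        ared = f(x) - f(trial)                            # actual reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho > 0.75 and snorm > 0.9 * delta:
            delta *= 2.0     # model agrees well and step hit the boundary: expand
        elif rho < 0.25:
            delta *= 0.25    # model agrees poorly: shrink the region
        if rho > 0:          # accept the step only if f actually decreased
            x = trial
    return x

# Example: a smooth function bounded below, as in the convergence theory.
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 2.0)]
x_star = trust_region_minimize(f, grad, [0.0, 0.0])
```

Because B_k = I here, the second derivative approximations are uniformly bounded, which is a far stronger condition than the linearly growing bound studied in the paper.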
References
[1] R. Fletcher, Practical Methods of Optimization, Volume 1: Unconstrained Optimization (Wiley, Chichester, 1980).
[2] M.J.D. Powell, "Convergence properties of a class of minimization algorithms", in: O.L. Mangasarian, R.R. Meyer and S.M. Robinson, eds., Nonlinear Programming 2 (Academic Press, New York, 1975).
[3] D.C. Sorensen, "An example concerning quasi-Newton estimation of a sparse Hessian", SIGNUM Newsletter 16(2) (1981) 8–10.
[4] D.C. Sorensen, "Trust region methods for unconstrained optimization", in: M.J.D. Powell, ed., Nonlinear Optimization 1981 (Academic Press, London, 1982).
Powell, M.J.D. On the global convergence of trust region algorithms for unconstrained minimization. Mathematical Programming 29, 297–303 (1984). https://doi.org/10.1007/BF02591998