Abstract
A Newton-like method is presented for minimizing a function of n variables. It uses only function and gradient values and is a variant of the discrete Newton algorithm. This variant requires fewer operations than the standard method when n > 39, and its storage is proportional to n rather than n².
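The abstract states the idea only at a high level. As a rough illustration of how a discrete Newton variant can get by with gradient values alone and storage proportional to n, the sketch below (Python with hypothetical helper names; it is not the paper's exact algorithm) approximates Hessian-vector products by differencing gradients and uses them inside a linear conjugate-gradient solve for the Newton step.

```python
import numpy as np

def hessian_vector_product(grad, x, v, h=1e-6):
    """Approximate H(x) @ v by a forward difference of gradients.

    Only gradient evaluations are needed; the n-by-n Hessian is never formed,
    so storage stays proportional to n.
    """
    return (grad(x + h * v) - grad(x)) / h

def newton_step_cg(grad, x, max_iter=None, tol=1e-8):
    """Approximately solve H(x) p = -grad(x) by linear conjugate gradients,
    with H(x) accessed only through finite-difference products.

    Note: a practical method must also deal with directions of negative
    curvature when H(x) is indefinite; this sketch ignores that case.
    """
    g = grad(x)
    n = x.size
    max_iter = max_iter or n
    p = np.zeros(n)
    r = -g.copy()            # residual of H p = -g at p = 0
    d = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Hd = hessian_vector_product(grad, x, d)
        alpha = rs_old / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs_old) * d
        rs_old = rs_new
    return p
```

Each inner iteration costs one extra gradient evaluation, and only a few n-vectors are kept in memory, which is consistent with the abstract's claim of storage proportional to n rather than n².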
Cite this article
O'Leary, D.P. A discrete Newton algorithm for minimizing a function of many variables. Mathematical Programming 23, 20–33 (1982). https://doi.org/10.1007/BF01583777