
An alternate implementation of Goldfarb's minimization algorithm


Abstract

Goldfarb's algorithm, which is one of the most successful methods for minimizing a function of several variables subject to linear constraints, uses a single matrix to hold second-derivative information and to ensure that search directions satisfy any active constraints. In the original version of the algorithm this matrix is full, but by making a change of variables so that the active constraints become bounds on vector components, the matrix is transformed so that the dimension of its non-zero part is only the number of variables less the number of active constraints. It is shown how this transformation may be used to give a version of the algorithm that usually provides a considerable saving in computation over the original version. It also allows the use of sparse-matrix techniques to take advantage of zeros in the matrix of linear constraints. Thus the method described can be regarded as an extension of linear programming to allow a non-linear objective function.
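The heart of the idea is a quasi-Newton step computed only in the subspace that keeps the active constraints satisfied, so the second-derivative matrix need only have dimension equal to the number of variables less the number of active constraints. The sketch below is a minimal illustration of that reduced-dimension step under stated assumptions, not the paper's implementation: it recomputes an orthonormal null-space basis with an SVD on each call (whereas the paper maintains the change of variables by updating factorizations and exploits sparsity in the constraint matrix), and all names such as reduced_newton_direction, A_active and B_reduced are illustrative, not taken from the paper.

```python
import numpy as np

def reduced_newton_direction(grad, A_active, B_reduced):
    """Search direction for minimizing f(x) subject to the active constraints A_active @ x = b.

    grad      : gradient of f at the current point, shape (n,)
    A_active  : matrix of active constraint normals, shape (t, n), assumed full row rank
    B_reduced : (n - t) x (n - t) approximation to the inverse Hessian of f
                restricted to the null space of A_active

    Returns a direction d with A_active @ d = 0, so a step along d keeps the
    active constraints satisfied.
    """
    t = A_active.shape[0]
    # Orthonormal basis Z for the null space of the active constraints:
    # the columns of Z span {d : A_active @ d = 0}.
    _, _, Vt = np.linalg.svd(A_active)
    Z = Vt[t:].T                      # shape (n, n - t)
    # Reduced gradient and reduced quasi-Newton step, lifted back to R^n.
    g_red = Z.T @ grad
    d = -Z @ (B_reduced @ g_red)
    return d

# Tiny usage example (hypothetical data): one active constraint x1 + x2 + x3 = 1
# in three variables, with the reduced matrix started as the identity, as a
# quasi-Newton method typically would.
grad = np.array([2.0, -1.0, 0.5])
A_active = np.array([[1.0, 1.0, 1.0]])
B_reduced = np.eye(2)
print(reduced_newton_direction(grad, A_active, B_reduced))
```

With t active constraints the matrix B_reduced is only (n - t) by (n - t), which is the dimension saving the abstract refers to; a complete method would also update B_reduced by a quasi-Newton formula after each step and revise the null-space basis as constraints enter or leave the active set.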




Cite this article

Buckley, A. An alternate implementation of Goldfarb's minimization algorithm. Mathematical Programming 8, 207–231 (1975). https://doi.org/10.1007/BF01580443
