Abstract
The recently proposed quasi-Newton method for constrained optimization has very attractive local convergence properties. To force global convergence of the method, Han has proposed a descent method that uses Zangwill's penalty function and an exact line search. In this paper a new method that adopts a differentiable penalty function and an approximate line search is presented. The proposed penalty function has the form of the augmented Lagrangian function. An algorithm for updating the parameters that appear in the penalty function is described, and global convergence of the given method is proved.
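The abstract does not reproduce the penalty function itself. As a hedged point of reference only, the classical augmented Lagrangian for the equality-constrained problem of minimizing f(x) subject to c_i(x) = 0, i = 1, ..., m (in the spirit of Rockafellar's and Powell's work cited below; the symbols P, f, c_i, λ_i and r are illustrative and not taken from the paper) is

\[
P(x, \lambda, r) \;=\; f(x) \;+\; \sum_{i=1}^{m} \lambda_i\, c_i(x) \;+\; \frac{r}{2} \sum_{i=1}^{m} c_i(x)^2 ,
\]

where the multiplier estimates λ_i and the penalty parameter r > 0 are the kind of parameters an updating rule must adjust; the function actually used in the paper may differ, in particular in its treatment of inequality constraints and in how these parameters enter.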
References
E.M.L. Beale, “Numerical methods”, in: J. Abadie, ed., Nonlinear programming (North-Holland, Amsterdam, 1967) pp. 133–205.
M.C. Biggs, “Constrained minimization using recursive equality quadratic programming”, in: F.A. Lootsma, ed., Numerical methods for nonlinear optimization (Academic Press, London, 1972) pp. 411–428.
M.C. Biggs, “Constrained minimization using recursive quadratic programming: Some alternative subproblem formulations”, in: L.C.W. Dixon and G.P. Szegö, eds., Towards global optimization (North-Holland, Amsterdam, 1975) pp. 341–349.
U.M. Garcia-Palomares and O.L. Mangasarian, “Superlinearly convergent quasi-Newton algorithms for nonlinearly constrained optimization problems”, Mathematical Programming 11 (1976) 1–13.
S.P. Han, “Superlinearly convergent variable metric algorithms for general nonlinear programming problems”, Mathematical Programming 11 (1976) 263–282.
S.P. Han, “Dual variable metric algorithms for constrained optimization”, SIAM Journal on Control and Optimization 15 (1977) 546–565.
S.P. Han, “A globally convergent method for nonlinear programming”, Journal of Optimization Theory and Applications 22 (1977) 297–309.
J.M. Ortega and W.C. Rheinboldt, Iterative solution of nonlinear equations in several variables (Academic Press, New York, 1970).
M.J.D. Powell, “Algorithms for nonlinear constraints that use Lagrangian functions”, Mathematical Programming 14 (1978) 224–248.
M.J.D. Powell, “A fast algorithm for nonlinearly constrained optimization calculations”, in: G.A. Watson, ed., Numerical Analysis (Springer-Verlag, Berlin, 1978) pp. 144–157.
M.J.D. Powell, “The convergence of variable metric methods for nonlinearly constrained optimization calculations”, in: O.L. Mangasarian, R.R. Meyer and S.M. Robinson, eds., Nonlinear programming 3 (Academic Press, New York, 1978) pp. 27–63.
S.M. Robinson, “A quadratically convergent algorithm for general nonlinear programming problems”, Mathematical Programming 3 (1972) 145–156.
R.T. Rockafellar, “A dual approach to solving nonlinear programming problems by unconstrained optimization”, Mathematical Programming 5 (1973) 354–373.
R.B. Wilson, “A simplicial method for convex programming”, Ph.D. thesis, Harvard University (Cambridge, MA, 1963).
H. Yamashita, “A globally convergent quasi-Newton method for constrained optimization that does not use a penalty function”, Research Report, Ono Systems (Tokyo, 1979).
W.I. Zangwill, “Non-linear programming via penalty functions”, Management Science 13 (1967) 344–358.
Cite this article
Yamashita, H. A globally convergent constrained quasi-Newton method with an augmented lagrangian type penalty function. Mathematical Programming 23, 75–86 (1982). https://doi.org/10.1007/BF01583780