Abstract
In the literature, proofs of superlinear convergence for approximate Newton or SQP methods applied to nonlinear programming problems require the objective and constraint functions to be twice continuously differentiable; sometimes their second-order derivatives are also required to be Lipschitzian. In this paper, we present approximate Newton and SQP methods for nonlinear programming problems whose objective and constraint functions have locally Lipschitzian derivatives, and we establish Q-superlinear convergence of these methods under the assumption that these derivatives are semismooth. This assumption is weaker than second-order differentiability. The extended linear-quadratic programming problem in the fully quadratic case is an example of a nonlinear programming problem whose objective function has a semismooth but not smooth derivative.
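To illustrate the idea behind the abstract, the following is a minimal sketch (not code from the paper) of a semismooth Newton iteration on a one-dimensional LC1 example. The function f(x) = 0.5·max(x, 0)² + 0.5·x² − x is hypothetical and chosen only so that its derivative g(x) = max(x, 0) + x − 1 is locally Lipschitzian and semismooth but not differentiable at x = 0; each step uses an element of Clarke's generalized Jacobian of g in place of a classical second derivative.

```python
def g(x):
    # Derivative of f(x) = 0.5*max(x, 0)**2 + 0.5*x**2 - x:
    # locally Lipschitz, semismooth, nonsmooth at x = 0.
    return max(x, 0.0) + x - 1.0

def generalized_derivative(x):
    # One element of Clarke's generalized Jacobian of g at x;
    # at x = 0 any value in [1, 2] may be chosen.
    return 2.0 if x > 0 else 1.0

def semismooth_newton(x, tol=1e-12, max_iter=50):
    # Newton-type iteration: x <- x - V^{-1} g(x), with V taken
    # from the generalized Jacobian instead of a Hessian.
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        x = x - gx / generalized_derivative(x)
    return x

print(semismooth_newton(-1.0))  # converges to the minimizer x* = 0.5
```

Starting from x = −1, the iteration crosses the kink at 0 and reaches the minimizer x* = 0.5 in two steps, consistent with the superlinear behavior the paper establishes under semismoothness.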
Additional information
This work is supported by the Australian Research Council.
This paper is dedicated to Professor O.L. Mangasarian on the occasion of his 60th birthday.
Cite this article
Qi, L. Superlinearly convergent approximate Newton methods for LC1 optimization problems. Mathematical Programming 64, 277–294 (1994). https://doi.org/10.1007/BF01582577