
On combining feasibility, descent and superlinear convergence in inequality constrained optimization

Published in Mathematical Programming.

Abstract

Extension of quasi-Newton techniques from unconstrained to constrained optimization via Sequential Quadratic Programming (SQP) presents several difficulties. Among these are the possible inconsistency, away from the solution, of first order approximations to the constraints, resulting in infeasibility of the quadratic programs; and the task of selecting a suitable merit function, to induce global convergence. In the case of inequality constrained optimization, both of these difficulties disappear if the algorithm is forced to generate iterates that all satisfy the constraints, and that yield monotonically decreasing objective function values. (Feasibility of the successive iterates is in fact required in many contexts such as in real-time applications or when the objective function is not well defined outside the feasible set.) It has been recently shown that this can be achieved while preserving local two-step superlinear convergence. In this note, the essential ingredients for an SQP-based method exhibiting the desired properties are highlighted. Correspondingly, a class of such algorithms is described and analyzed. Tests performed with an efficient implementation are discussed.
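To make the feasible-iterates idea concrete, the following is a minimal toy sketch, not the authors' algorithm or the FSQP code: with the Hessian estimate fixed at the identity, the QP subproblem for a single inequality constraint reduces to a projection onto a half-space (solvable in closed form), the resulting direction is bent slightly toward the interior of the feasible set so that a feasible step exists, and the line search accepts only steps that both remain feasible and decrease the objective. The test problem and the bending coefficient are illustrative choices.

```python
import numpy as np

# Toy problem (illustrative, not from the paper):
#   minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2
#   subject to g(x) = ||x||^2 - 1 <= 0.
def f(x):  return (x[0] - 2.0)**2 + (x[1] - 1.0)**2
def gf(x): return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
def g(x):  return float(x @ x) - 1.0
def gg(x): return 2.0 * x

def qp_direction(x):
    """With H = I, the QP subproblem
         min  gf(x).d + 0.5*||d||^2   s.t.  g(x) + gg(x).d <= 0
       is the projection of -gf(x) onto a half-space: closed form."""
    d = -gf(x)                      # unconstrained minimizer of the QP
    viol = g(x) + gg(x) @ d         # linearized constraint value at d
    if viol > 0.0:                  # project onto {d : g + gg.d <= 0}
        n = gg(x)
        d = d - (viol / (n @ n)) * n
    return d

def feasible_sqp(x, iters=200):
    """All iterates stay feasible; f decreases monotonically."""
    for _ in range(iters):
        d = qp_direction(x)
        if np.linalg.norm(d) < 1e-10:
            break
        # Bend slightly toward the interior of the feasible set so that
        # a feasible step exists (feasible-iterates SQP methods use a
        # correction of this flavor, with the tilt vanishing near the
        # solution; the coefficient here is an arbitrary small value).
        d = d - 1e-3 * np.linalg.norm(d) * gg(x)
        t = 1.0
        # Accept only steps that are feasible AND decrease the objective.
        while t > 1e-12 and (g(x + t * d) > 0.0 or f(x + t * d) >= f(x)):
            t *= 0.5
        if t <= 1e-12:              # no acceptable feasible step: stop
            break
        x = x + t * d
    return x

x_star = feasible_sqp(np.zeros(2))
# Constrained minimizer: the projection of (2, 1) onto the unit disk.
```

Starting from the strictly feasible point `(0, 0)`, every iterate satisfies the constraint and the objective decreases at each accepted step, which is exactly the pair of properties the abstract requires; the two-step superlinear rate of the paper's method of course needs the full machinery (quasi-Newton updates, the vanishing tilt, and the correction step), which this sketch omits.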




Additional information

This research was supported in part by NSF's Engineering Research Centers Program No. NSFD-CDR-88-03012, and by NSF grants No. DMC-84-51515 and DMC-88-15996.


Cite this article

Panier, E.R., Tits, A.L. On combining feasibility, descent and superlinear convergence in inequality constrained optimization. Mathematical Programming 59, 261–276 (1993). https://doi.org/10.1007/BF01581247

