
On the cost of solving augmented Lagrangian subproblems

  • Full Length Paper
  • Series A
Mathematical Programming

Abstract

At each iteration of the augmented Lagrangian algorithm, a nonlinear subproblem must be solved. The number of inner iterations (of whatever method is applied) needed to obtain a solution of the subproblem, or even a suitable approximate stationary point, is in principle unknown. In this paper we show that, to compute an approximate stationary point sufficient to guarantee local superlinear convergence of the augmented Lagrangian iterations, it is enough to solve two quadratic programming problems (or two linear systems in the equality-constrained case). In other words, two inner Newton-type iterations are sufficient. To the best of our knowledge, such results were not previously available even under the strongest assumptions (second-order sufficiency, strict complementarity, and the linear independence constraint qualification). Our analysis is performed under second-order sufficiency only, which is the weakest assumption for obtaining local convergence and a convergence rate for the outer iterations of the augmented Lagrangian algorithm. The structure of the quadratic problems in question is related to stabilized sequential quadratic programming and to second-order corrections.
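To make the setting concrete, the following is a minimal numerical sketch, our own illustration rather than the paper's stabilized-SQP-based scheme: classical augmented Lagrangian outer iterations for a toy equality-constrained problem, where each subproblem is handled by exactly two inner Newton steps, i.e., two linear systems per outer iteration. The toy problem, the penalty parameter, and all function names below are assumptions made for illustration only.

```python
import numpy as np

# Sketch (not the paper's algorithm): augmented Lagrangian outer iterations
# where each subproblem is only treated with TWO Newton steps (two linear
# systems), mirroring the "two inner iterations" idea described above.
#
# Illustrative problem (ours):
#   minimize    f(x) = x1^2 + x2^2
#   subject to  h(x) = x1 + x2 - 1 = 0
# Exact solution: x* = (0.5, 0.5), multiplier lambda* = -1.

def f_grad(x):  return 2.0 * x
def f_hess(x):  return 2.0 * np.eye(2)
def h(x):       return np.array([x[0] + x[1] - 1.0])
def h_jac(x):   return np.array([[1.0, 1.0]])

def auglag_grad(x, lam, rho):
    # gradient in x of L_rho(x, lam) = f(x) + lam^T h(x) + (rho/2)||h(x)||^2
    return f_grad(x) + h_jac(x).T @ (lam + rho * h(x))

def auglag_hess(x, lam, rho):
    # Hessian in x; the constraint is linear here, so it contributes no
    # second-order term beyond rho * J^T J
    J = h_jac(x)
    return f_hess(x) + rho * J.T @ J

def two_newton_steps(x, lam, rho):
    # approximate subproblem solution: exactly two Newton steps on
    # grad_x L_rho(x, lam) = 0 (for this quadratic toy problem one step is
    # already exact; the loop just shows the structure of the inner phase)
    for _ in range(2):
        step = np.linalg.solve(auglag_hess(x, lam, rho),
                               -auglag_grad(x, lam, rho))
        x = x + step
    return x

x, lam, rho = np.array([2.0, -1.0]), np.zeros(1), 10.0
for k in range(6):
    x = two_newton_steps(x, lam, rho)   # inexact inner solve
    lam = lam + rho * h(x)              # multiplier update
    print(k, x, lam, np.linalg.norm(h(x)))
```

Running this prints iterates approaching x = (0.5, 0.5) and lambda = -1, with the constraint violation shrinking at each outer iteration; the paper's contribution concerns the sharper question of how little inner work suffices to preserve superlinear convergence of the outer sequence.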



Acknowledgements

We thank the three anonymous referees for their evaluations of the paper, which led to an improved version. We are especially grateful to the referee who pointed out one technical issue that required a correction.

Author information

Corresponding author

Correspondence to Mikhail Solodov.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Research of the second author is supported in part by CNPq Grant 303724/2015-3, by FAPERJ Grant 203.052/2016, and by the Russian Foundation for Basic Research Grant 19-51-12003 NNIOa.


Cite this article

Fernández, D., Solodov, M. On the cost of solving augmented Lagrangian subproblems. Math. Program. 182, 37–55 (2020). https://doi.org/10.1007/s10107-019-01384-1

