
On the best achievable quality of limit points of augmented Lagrangian schemes

  • Original Paper

Published in Numerical Algorithms

A Correction to this article was published on 20 December 2021


Abstract

The optimization literature is rich in papers improving the global convergence theory of augmented Lagrangian schemes. These results usually rely on weak constraint qualifications or, more recently, on sequential optimality conditions obtained via penalization techniques. In this paper, we take a somewhat different approach: the algorithm itself is used to formulate a new optimality condition satisfied by its feasible limit points. With this tool at hand, we present several new properties of, and insights on, limit points of augmented Lagrangian schemes; in particular, we characterize the strongest possible global convergence result for the safeguarded augmented Lagrangian method.
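As a rough illustration of the kind of scheme the paper studies, the following is a minimal sketch of a safeguarded augmented Lagrangian iteration for a single equality constraint. This is illustrative only: the function names, parameters, and tolerances are our own, and the paper's algorithm and its analysis are far more general.

```python
import numpy as np
from scipy.optimize import minimize


def safeguarded_al(f, h, x0, lam_max=1e6, rho0=10.0, tau=0.5,
                   gamma=10.0, tol=1e-6, max_iter=50):
    """Sketch of a safeguarded augmented Lagrangian method for
    min f(x) s.t. h(x) = 0 (one equality constraint)."""
    x, lam, rho = np.asarray(x0, dtype=float), 0.0, rho0
    h_prev = abs(h(x))
    for _ in range(max_iter):
        # Approximately minimize the augmented Lagrangian in x.
        L = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
        x = minimize(L, x, method="BFGS").x
        # Safeguarded first-order multiplier update: the estimate is
        # projected onto a bounded box, which is what "safeguarded" means.
        lam = float(np.clip(lam + rho * h(x), -lam_max, lam_max))
        if abs(h(x)) < tol:
            break
        # Increase the penalty parameter when feasibility progress stalls.
        if abs(h(x)) > tau * h_prev:
            rho *= gamma
        h_prev = abs(h(x))
    return x, lam


# Toy instance: min x0^2 + x1^2 s.t. x0 + x1 = 1; solution (0.5, 0.5), lam = -1.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0
x, lam = safeguarded_al(f, h, [0.0, 0.0])
```

The safeguard is the projection of the multiplier estimate onto a bounded box; the classical (non-safeguarded) multiplier method omits the clip, and the paper's convergence characterization concerns precisely the safeguarded variant.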




Acknowledgements

We would like to thank the reviewers of this paper for their many insightful suggestions.

Funding

This work has been partially supported by CEPID-CeMEAI (FAPESP 2013/07375-0), FAPES (grant 116/2019), FAPESP (grants 2018/24293-0, 2017/18308-2 and 2017/17840-2), CNPq (grants 301888/2017-5, 303427/2018-3, 404656/2018-8, 438185/2018-8 and 307270/2019-0), and PRONEX - CNPq/FAPERJ (grant E-26/010.001247/2016).

Author information


Corresponding author

Correspondence to Roberto Andreani.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Andreani, R., Haeser, G., Mito, L.M. et al. On the best achievable quality of limit points of augmented Lagrangian schemes. Numer Algor 90, 851–877 (2022). https://doi.org/10.1007/s11075-021-01212-8

