
Unified theory of augmented Lagrangian methods for constrained global optimization

Journal of Global Optimization 44, 433–458 (2009)

Abstract

In this paper we classify different augmented Lagrangian functions into three unified classes. Based on two of the unified formulations, we construct two corresponding convergent augmented Lagrangian methods that do not require the global solvability of the Lagrangian relaxation, and whose global convergence properties require neither the boundedness of the multiplier sequence nor any constraint qualification. In particular, when the sequence of iteration points does not converge, we give a necessary and sufficient condition for the convergence of the objective values of the iteration points. We further derive two multiplier algorithms that require the same convergence condition and possess the same properties as the proposed augmented Lagrangian methods. Since the existence of a global saddle point is crucial to the success of a dual search, in the second half of the paper we generalize the saddle-point existence theorems in the literature within the framework of the unified classes of augmented Lagrangian functions.
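For orientation, the classical (Hestenes–Powell–Rockafellar) augmented Lagrangian that the unified classes generalize, together with the global saddle-point condition the abstract refers to, can be stated as follows; this is the textbook formulation, not any of the paper's three unified classes.

```latex
% Classical augmented Lagrangian for  min f(x)  s.t.  h(x) = 0:
\[
  L_c(x,u) = f(x) + u^{\top} h(x) + \frac{c}{2}\,\lVert h(x)\rVert^{2}.
\]
% A pair (x^*, u^*) is a global saddle point of L_c if, for all x \in X
% and all u \in \mathbb{R}^m,
\[
  L_c(x^*, u) \le L_c(x^*, u^*) \le L_c(x, u^*).
\]
```

A minimal sketch of the corresponding method of multipliers appears below, assuming the classical first-order multiplier update. The function names (method_of_multipliers, f, h) are illustrative, and the local inner solver means this sketch does not carry the global convergence guarantees developed in the paper, which are designed precisely to avoid requiring global solvability of the Lagrangian relaxation.

```python
# Sketch of the classical method of multipliers for  min f(x)  s.t.  h(x) = 0.
# Illustrative only: a local inner solver (BFGS) stands in for the global
# minimization that a faithful global-optimization scheme would need.
import numpy as np
from scipy.optimize import minimize


def method_of_multipliers(f, h, x0, u0, c=10.0, tol=1e-8, max_iter=30):
    x = np.asarray(x0, dtype=float)
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        def L(z):
            hz = h(z)
            # L_c(x, u) = f(x) + u^T h(x) + (c/2) ||h(x)||^2
            return f(z) + u @ hz + 0.5 * c * (hz @ hz)
        x = minimize(L, x, method="BFGS").x   # inner (local) minimization in x
        viol = h(x)
        if np.linalg.norm(viol) < tol:
            break
        u = u + c * viol                      # first-order multiplier update
    return x, u


# Toy check: min x1^2 + x2^2  s.t.  x1 + x2 = 1, whose solution is
# x* = (0.5, 0.5) with multiplier u* = -1.
x_opt, u_opt = method_of_multipliers(
    f=lambda z: z @ z,
    h=lambda z: np.array([z[0] + z[1] - 1.0]),
    x0=np.zeros(2),
    u0=np.zeros(1),
)
print(x_opt, u_opt)   # approximately [0.5 0.5] [-1.]
```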




Author information

Correspondence to Duan Li.


About this article

Cite this article

Wang, C.Y., Li, D. Unified theory of augmented Lagrangian methods for constrained global optimization. J Glob Optim 44, 433–458 (2009). https://doi.org/10.1007/s10898-008-9347-1

