Abstract
In this paper we classify various augmented Lagrangian functions into three unified classes. Based on two unified formulations, we construct two convergent augmented Lagrangian methods that do not require global solvability of the Lagrangian relaxation, and whose global convergence properties hold without boundedness of the multiplier sequence or any constraint qualification. In particular, when the sequence of iterates does not converge, we give a necessary and sufficient condition for the convergence of the objective values of the iterates. We further derive two multiplier algorithms that require the same convergence condition and possess the same properties as the proposed convergent augmented Lagrangian methods. The existence of a global saddle point is crucial to guaranteeing the success of a dual search. In the second half of the paper, we generalize the existence theorems for a global saddle point in the literature under the framework of the unified classes of augmented Lagrangian functions.
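To fix ideas, the classical augmented Lagrangian (method of multipliers) scheme of Hestenes, Powell, and Rockafellar, which the paper's unified classes generalize, can be sketched on a toy equality-constrained problem. This is a minimal illustration of the basic primal-dual iteration only, not the paper's unified methods; the problem instance and penalty parameter are chosen for illustration.

```python
import numpy as np

def augmented_lagrangian(rho=10.0, tol=1e-8, max_outer=50):
    """Method of multipliers on the toy problem:
        minimize  x1^2 + x2^2   subject to   x1 + x2 = 1,
    whose solution is x* = (0.5, 0.5) with multiplier lam* = -1.
    L_rho(x, lam) = f(x) + lam*c(x) + (rho/2)*c(x)^2,  c(x) = a.x - 1.
    """
    a = np.array([1.0, 1.0])   # constraint gradient
    lam = 0.0                  # multiplier estimate
    x = np.zeros(2)
    for _ in range(max_outer):
        # Inner step: L_rho is quadratic here, so its minimizer solves
        #   (2 I + rho a a^T) x = (rho - lam) a   exactly.
        A = 2.0 * np.eye(2) + rho * np.outer(a, a)
        x = np.linalg.solve(A, (rho - lam) * a)
        c = a @ x - 1.0        # constraint violation at the inner minimizer
        if abs(c) < tol:
            break
        lam += rho * c         # first-order multiplier update
    return x, lam

x, lam = augmented_lagrangian()
print(x, lam)   # x close to (0.5, 0.5), lam close to -1
```

In the general nonconvex setting studied in the paper, the inner minimization cannot be solved in closed form, and the paper's contribution is precisely that global solvability of this subproblem, boundedness of the multipliers, and constraint qualifications are not required for convergence.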
Wang, CY., Li, D. Unified theory of augmented Lagrangian methods for constrained global optimization. J Glob Optim 44, 433–458 (2009). https://doi.org/10.1007/s10898-008-9347-1