Abstract
The augmented Lagrangian method is a classical solution method for nonlinear optimization problems. At each iteration, it minimizes an augmented Lagrangian function, which combines the objective function with penalty terms built from the constraint functions and current estimates of the Lagrange multipliers. If the multiplier estimates in the augmented Lagrangian function are close to the exact Lagrange multipliers at an optimal solution, the method converges steadily. Because the conventional augmented Lagrangian method works with inaccurate multiplier estimates, it can converge slowly. In this paper, we propose a novel augmented Lagrangian method that allows the augmented Lagrangian function and its minimization problem to have different constraints at each iteration. This flexibility enables the new method to obtain more accurate Lagrange multiplier estimates by exploiting Karush–Kuhn–Tucker points of the subproblems, and consequently to converge faster and more reliably.
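To make the classical scheme the abstract builds on concrete, the following is a minimal sketch of the conventional augmented Lagrangian iteration for an equality-constrained problem. It is not the alternating-constraints variant proposed in the paper; the test problem, step sizes, and iteration counts are illustrative assumptions.

```python
# Classical augmented Lagrangian method (minimal sketch, not the paper's
# alternating-constraints variant).
# Hypothetical test problem: minimize x^2 + y^2  subject to  x + y = 1.
# Known solution: x = y = 0.5, with Lagrange multiplier lambda* = -1.

def f_grad(x, y):
    """Gradient of the objective f(x, y) = x^2 + y^2."""
    return 2.0 * x, 2.0 * y

def h(x, y):
    """Equality constraint, written as h(x, y) = 0."""
    return x + y - 1.0

def augmented_lagrangian(rho=10.0, outer=30, inner=500, step=0.02):
    x, y, lam = 0.0, 0.0, 0.0
    for _ in range(outer):
        # Inner loop: approximately minimize the augmented Lagrangian
        #   L(x, y; lam) = f(x, y) + lam * h(x, y) + (rho / 2) * h(x, y)^2
        # by plain gradient descent (chosen here for simplicity).
        for _ in range(inner):
            c = h(x, y)
            gx, gy = f_grad(x, y)
            x -= step * (gx + lam + rho * c)
            y -= step * (gy + lam + rho * c)
        # First-order multiplier update: the closer lam is to the exact
        # multiplier, the faster the outer loop converges -- the behavior
        # the paper's method aims to improve.
        lam += rho * h(x, y)
    return x, y, lam
```

On this problem the multiplier estimate converges to -1 and the iterates to (0.5, 0.5); the outer-loop error shrinks geometrically, which illustrates why better multiplier estimates translate directly into faster convergence.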
Cite this article
Hassan, S.N.H.B., Niimi, T. & Yamashita, N. Augmented Lagrangian Method with Alternating Constraints for Nonlinear Optimization Problems. J Optim Theory Appl 181, 883–904 (2019). https://doi.org/10.1007/s10957-019-01488-w