Abstract
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first-order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems are solved in the minimax sense subject to the linear constraints, which ensures a feasible-point algorithm. Further, we introduce local bounds on the solutions of the linear subproblems; the bounds are adjusted automatically, depending on the quality of the linear approximations. It is proved that the algorithm always converges to the set of stationary points of the problem, a stationary point being defined in terms of the generalized gradients of the minimax objective function. It is further proved that, under mild regularity conditions, the algorithm is identical to a quadratically convergent Newton iteration in its final stages. We demonstrate the performance of the algorithm by solving a number of numerical examples with up to 50 variables, 163 functions, and 25 constraints. We have also implemented a version of the algorithm that is particularly suited to restricted approximation problems.
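The iteration outlined in the abstract can be pictured as a trust-region-style loop: linearize the functions at the current feasible point, solve the linearized minimax problem as a linear program subject to the original linear constraints and a local bound on the step, and then enlarge or shrink that bound according to how well the linear model predicted the actual decrease. The sketch below, written in Python around `scipy.optimize.linprog`, is only an illustration of that idea under assumed details; the function names, the acceptance test, and the update constants (0.25, 0.75, and the factors 2 and 0.25) are our own illustrative choices and are not taken from the paper.

```python
# Sketch (not the authors' code) of a linearly constrained minimax iteration:
# successive linear approximations, an LP minimax subproblem, and an
# automatically adjusted local bound on the step.
import numpy as np
from scipy.optimize import linprog


def minimax_value(fs, x):
    """F(x) = max_i f_i(x)."""
    return max(f(x) for f in fs)


def linearized_minimax_step(fs, grads, x, lam, A_ub=None, b_ub=None, A_eq=None, b_eq=None):
    """Solve min_h max_i [f_i(x) + g_i^T h] subject to the linear constraints
    on x + h and the local bound |h|_inf <= lam, via the LP in (h, t):
        minimize t  s.t.  g_i^T h - t <= -f_i(x),  A_ub h <= b_ub - A_ub x,
                          A_eq h = b_eq - A_eq x,  -lam <= h <= lam.
    """
    n, m = len(x), len(fs)
    fvals = np.array([f(x) for f in fs])
    G = np.vstack([g(x) for g in grads])            # m x n matrix of gradients

    c = np.zeros(n + 1)
    c[-1] = 1.0                                     # minimize the extra variable t

    A1 = np.hstack([G, -np.ones((m, 1))])           # max-constraints
    b1 = -fvals
    if A_ub is not None:                            # shifted linear inequalities
        A1 = np.vstack([A1, np.hstack([A_ub, np.zeros((A_ub.shape[0], 1))])])
        b1 = np.concatenate([b1, b_ub - A_ub @ x])

    Aeq = beq = None
    if A_eq is not None:                            # shifted linear equalities
        Aeq = np.hstack([A_eq, np.zeros((A_eq.shape[0], 1))])
        beq = b_eq - A_eq @ x

    bounds = [(-lam, lam)] * n + [(None, None)]     # local bound on h, t is free
    res = linprog(c, A_ub=A1, b_ub=b1, A_eq=Aeq, b_eq=beq, bounds=bounds)
    return res.x[:n], res.x[-1]                     # step h and predicted model value


def constrained_minimax(fs, grads, x0, lam0=1.0, tol=1e-8, max_iter=200, **lin):
    """Iterate from a feasible x0; lam is the local bound on each step."""
    x, lam = np.asarray(x0, float), lam0
    for _ in range(max_iter):
        F_old = minimax_value(fs, x)
        h, predicted = linearized_minimax_step(fs, grads, x, lam, **lin)
        F_new = minimax_value(fs, x + h)
        predicted_decrease = F_old - predicted
        actual_decrease = F_old - F_new
        if predicted_decrease <= tol:
            break                                   # linear model predicts no further progress
        ratio = actual_decrease / predicted_decrease
        if ratio > 0.75:                            # good agreement: enlarge the local bound
            lam *= 2.0
        elif ratio < 0.25:                          # poor agreement: shrink the local bound
            lam *= 0.25
        if actual_decrease > 0:                     # accept only improving (and feasible) steps
            x = x + h
    return x
```

Because the subproblem constrains x + h by the original linear constraints, every accepted iterate stays feasible, which is the feasible-point property referred to in the abstract.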
Additional information
This work has been supported by the Danish Natural Science Research Council, Grant No. 511-6874.
Cite this article
Madsen, K., Schjær-Jacobsen, H. Linearly constrained minimax optimization. Mathematical Programming 14, 208–223 (1978). https://doi.org/10.1007/BF01588966