Abstract
Discretization algorithms for semiinfinite minimax problems replace the original problem, containing an infinite number of functions, by an approximation involving a finite number, and then solve the resulting approximate problem. The approximation gives rise to a discretization error, and suboptimal solution of the approximate problem gives rise to an optimization error. Accounting for both discretization and optimization errors, we determine the rate of convergence of discretization algorithms, as a computing budget tends to infinity. We find that the rate of convergence depends on the class of optimization algorithms used to solve the approximate problem as well as the policy for selecting discretization level and number of optimization iterations. We construct optimal policies that achieve the best possible rate of convergence and find that, under certain circumstances, the better rate is obtained by inexpensive gradient methods.
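The discretization idea described above can be illustrated on a toy one-dimensional problem. The sketch below is an illustrative assumption, not the paper's setup: the function f, the uniform grid, and the diminishing step sizes are all choices made here for demonstration. It replaces the semiinfinite minimax problem min_x max_{y in [0,1]} f(x,y) by a finite maximum over a grid (introducing a discretization error) and applies an inexpensive subgradient method to the approximate problem (leaving an optimization error), mirroring the two error sources the abstract discusses.

```python
# Toy semiinfinite minimax problem:
#   minimize over x:  phi(x) = max over y in [0,1] of f(x, y),  with f(x, y) = (x - y)**2.
# The exact minimizer is x* = 0.5 with optimal value 0.25.

def f(x, y):
    return (x - y) ** 2

def df_dx(x, y):
    return 2.0 * (x - y)

def solve_discretized(n_grid=101, n_iters=200, x0=0.0):
    # Discretization: replace the infinite family {f(., y) : y in [0,1]}
    # by the finite family over a uniform grid of n_grid points.
    ys = [i / (n_grid - 1) for i in range(n_grid)]
    x = x0
    for k in range(1, n_iters + 1):
        # A subgradient of the finite max at x is the gradient of any
        # active (maximizing) function on the grid.
        y_star = max(ys, key=lambda y: f(x, y))
        x -= (1.0 / k) * df_dx(x, y_star)  # diminishing step size
    approx_value = max(f(x, y) for y in ys)
    return x, approx_value

x, val = solve_discretized()
```

Running the sketch drives x toward 0.5 and the approximate optimal value toward 0.25; tightening the grid shrinks the discretization error while more iterations shrink the optimization error, which is exactly the budget trade-off the abstract analyzes.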
Notes
Iterates may depend on quantities, such as algorithm parameters and the initial point used. In this paper, we view the specification of such quantities as part of the algorithm and, therefore, do not reference them directly.
Acknowledgements
J.O. Royset acknowledges support from AFOSR Young Investigator and Optimization & Discrete Math. Programs.
Royset, J.O., Pee, E.Y. Rate of Convergence Analysis of Discretization and Smoothing Algorithms for Semiinfinite Minimax Problems. J Optim Theory Appl 155, 855–882 (2012). https://doi.org/10.1007/s10957-012-0109-3