
Adaptive Sampling line search for local stochastic optimization with integer variables

  • Full Length Paper
  • Series B
  • Published in: Mathematical Programming

Abstract

We consider optimization problems with an objective function that is estimable using a Monte Carlo oracle, constraint functions that are known deterministically through a constraint-satisfaction oracle, and integer decision variables. Seeking an appropriately defined local minimum, we propose an iterative adaptive sampling algorithm that, during each iteration, performs a statistical local optimality test followed by a line search when the test detects a stochastic descent direction. We prove a number of results. First, the true function values at the iterates generated by the algorithm form an almost-supermartingale process, and the iterates are absorbed with probability one into the set of local minima in finite time. Second, such absorption happens exponentially fast in iteration number and in oracle calls. This result is analogous to non-standard rate guarantees in stochastic continuous optimization contexts that involve sharp minima. Third, the oracle complexity of the proposed algorithm increases linearly in the dimensionality of the local neighborhood. As a solver, primarily due to combining line searches that use common random numbers with statistical tests for local optimality, the proposed algorithm is effective on a variety of problems. We illustrate such performance using three problem suites, on problems ranging from 25 to 200 dimensions.




Acknowledgements

The third author fondly remembers his personal and research interactions with Shabbir Ahmed. Shabbir was an amazing scholar who made fundamental contributions to stochastic programming. The third author also thanks A. Villukanti and S. Venkatramanan at the Biocomplexity Institute, University of Virginia for discussions that led to the incorporation of some ideas within ADALINE, and its name.

Author information


Corresponding author

Correspondence to Raghu Pasupathy.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

S. R. Hunter thanks the National Science Foundation for support under grant CMMI-1554144. R. Pasupathy thanks the Office of Naval Research for support provided by the grants N000141712295 and 13000991.

Appendices

Procedure LI: line search

Procedure LI, listed in Algorithm 3, is straightforward. Starting with the candidate next iterate \(\widetilde{X}_k\), LI successively observes objective estimates, obtained with sample size \(M_k\), at points that are “closest” to the line \(\widetilde{X}_k + t \hat{d}_k\), \(t \in \mathbb{R}\), as long as the observed objective estimates are monotone decreasing. More precisely, given the starting point \(W_0 := \widetilde{X}_k\) and direction \(\hat{d}_k\), LI obtains objective estimates at the points \(W_\ell := \arg\min \bigl\{ \Vert x - (\widetilde{X}_k + 2^{\ell-1} s_0 \hat{d}_k) \Vert : x \in \mathcal{X} \setminus \{\widetilde{X}_k\} \bigr\}\), \(\ell = 1, 2, 3, \ldots\), where \(s_0\) is a fixed constant that defaults to \(s_0 = 1\). The \(\arg\min\) operation is computationally trivial since the neighbors of the point \(\widetilde{X}_k + t \hat{d}_k\) can be obtained by rounding. The line search proceeds as long as the sequence \(\bar{F}(W_\ell, M_k)\), \(\ell = 0, 1, 2, \ldots\) is non-increasing, or until a pre-specified limit on the maximum number of line-search steps is reached. Finally, because the last step size may be large, Procedure LI performs a simple bisection search to find a better point between the penultimate point and the last point.

Algorithm 3: Procedure LI (pseudocode listing not reproduced here)
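For concreteness, the following is a minimal Python sketch of the line-search logic just described. It is an illustration under assumptions, not the paper's implementation: the helper names `oracle` (a hypothetical Monte Carlo estimator returning a sample mean of size \(M_k\)), `is_feasible`, and `max_steps` are not from the paper, rounding to the integer lattice stands in for the arg-min over \(\mathcal{X} \setminus \{\widetilde{X}_k\}\), and the terminating bisection refinement is omitted.

```python
import numpy as np

def line_search_LI(x_tilde, d_hat, oracle, is_feasible, M_k, s0=1.0, max_steps=20):
    """Sketch of Procedure LI: step along x_tilde + t * d_hat with doubling step sizes
    while sample-mean objective estimates (sample size M_k) remain non-increasing.
    `oracle(x, m)` and `is_feasible(x)` are hypothetical helpers, not the paper's API."""

    def nearest_feasible_neighbor(target, exclude):
        # Round to the integer lattice; a stand-in for the arg-min over X \ {x_tilde}.
        z = np.rint(target).astype(int)
        if np.array_equal(z, exclude) or not is_feasible(z):
            return None
        return z

    W = [np.asarray(x_tilde, dtype=int)]   # W_0 := x_tilde
    F = [oracle(W[0], M_k)]                # F-bar(W_0, M_k)
    for ell in range(1, max_steps + 1):
        target = W[0] + (2 ** (ell - 1)) * s0 * np.asarray(d_hat, dtype=float)
        cand = nearest_feasible_neighbor(target, W[0])
        if cand is None:
            break
        f_cand = oracle(cand, M_k)
        if f_cand > F[-1]:                 # estimates no longer non-increasing: stop
            break
        W.append(cand)
        F.append(f_cand)
    # (The paper refines between the penultimate and last points by bisection; omitted.)
    best = int(np.argmin(F))
    return W[best], F[best]
```

In use, the returned point would feed back into the adaptive sampling loop as the next candidate iterate; the common-random-numbers aspect of the line search mentioned in the abstract is not modeled in this sketch.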

Procedure DA: estimating a descent direction

Procedure DA estimates a descent direction \(\hat{d}_k\) at the candidate iterate, or finds that the candidate iterate is an estimated local minimizer in its \(\mathcal{N}_1\)-neighborhood. DA performs the following steps:

  1. DA enumerates the neighbors of the candidate iterate, looking for (a) at least \(d+1\) feasible neighbors that form a simplex with positive volume in \(d\) dimensions, and (b) at least one better neighbor. (See Algorithm 4, steps 1–14.)

  2. If there are no better neighbors, DA returns with \(\mathcal{B}\) set to true. (See Algorithm 4, step 15.)

  3. Otherwise, DA constructs an estimated descent cone and an estimated descent direction. (See Algorithm 4, steps 17–21.)

Finally, if DA identifies an estimated better neighbor, it updates the candidate next iterate. As our analysis holds with or without this “hop,” for simplicity, we omit it from Sect. 4.

Algorithm 4: Procedure DA (pseudocode listing not reproduced here)
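As a companion to the sketch of Procedure LI, here is a rough Python sketch of the enumeration logic in Procedure DA, again under assumptions rather than as the paper's Algorithm 4: `oracle` and `is_feasible` are hypothetical helpers, unit coordinate perturbations stand in for the \(\mathcal{N}_1\)-neighborhood, the simplex-volume check in step (a) is omitted, and averaging the directions toward estimated-better neighbors is only a placeholder for the descent-cone construction in steps 17–21.

```python
import numpy as np

def descent_direction_DA(x_tilde, oracle, is_feasible, M_k):
    """Sketch of Procedure DA's outer logic. `oracle` and `is_feasible` are hypothetical
    helpers; unit coordinate moves stand in for the N1-neighborhood, and the simplex-
    volume check and descent-cone construction of Algorithm 4 are not reproduced."""
    x_tilde = np.asarray(x_tilde, dtype=int)
    f0 = oracle(x_tilde, M_k)

    # Enumerate +/-1 coordinate perturbations (assumed N1-neighborhood).
    neighbors = []
    for i in range(x_tilde.size):
        for step in (-1, 1):
            y = x_tilde.copy()
            y[i] += step
            if is_feasible(y):
                neighbors.append(y)

    estimates = [(y, oracle(y, M_k)) for y in neighbors]
    better = [(y, fy) for (y, fy) in estimates if fy < f0]

    if not better:
        return None, True   # B = true: x_tilde is an estimated local minimizer

    # Placeholder for the descent-cone step (Algorithm 4, steps 17-21): average the
    # unit directions toward the estimated-better neighbors.
    dirs = np.array([(y - x_tilde) / np.linalg.norm(y - x_tilde) for (y, _) in better],
                    dtype=float)
    d_hat = dirs.mean(axis=0)
    norm = np.linalg.norm(d_hat)
    d_hat = d_hat / norm if norm > 0 else dirs[0]
    return d_hat, False     # B = false: d_hat is the estimated descent direction
```

A direction returned by this sketch would then play the role of \(\hat{d}_k\) handed to Procedure LI above.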


About this article


Cite this article

Ragavan, P.K., Hunter, S.R., Pasupathy, R. et al. Adaptive Sampling line search for local stochastic optimization with integer variables. Math. Program. 196, 775–804 (2022). https://doi.org/10.1007/s10107-021-01667-6

