
PROGRESS: Progressive Reinforcement-Learning-Based Surrogate Selection

Conference paper in: Learning and Intelligent Optimization (LION 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7997)


Abstract

In most engineering problems, experiments for evaluating the performance of different setups are time-consuming, expensive, or both. Sequential experimental designs have therefore become an indispensable technique for optimizing the objective functions of such problems. In this context, most problems must be treated as black boxes: no function properties are known a priori from which the best-suited surrogate model class could be selected. We therefore propose a new ensemble-based approach that identifies the best surrogate model during the optimization process using reinforcement learning techniques. The procedure is general and can be applied to arbitrary ensembles of surrogate models. Results on 24 well-known black-box functions show that the progressive procedure selects suitable models from the ensemble and competes with state-of-the-art methods for sequential optimization.
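To make the idea concrete, the following is a minimal sketch of bandit-style surrogate selection inside a sequential model-based optimization loop. It is not the paper's actual PROGRESS algorithm: the softmax selection rule, the improvement-based reward, the random-candidate proposal step, and the scikit-learn model ensemble are all illustrative assumptions.

```python
# Minimal sketch (Python, scikit-learn): a reinforcement-learning rule picks
# one surrogate from an ensemble at each iteration of a sequential design.
# NOT the paper's PROGRESS algorithm; all modeling choices here are assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def objective(x):
    """Cheap stand-in for an expensive black-box experiment."""
    return float(np.sum((x - 0.3) ** 2))

def propose(model, dim, n_candidates=500):
    """Return the candidate the current surrogate predicts to be best.
    (The paper uses more refined infill criteria; random-candidate
    exploitation keeps the sketch short.)"""
    cand = rng.uniform(0.0, 1.0, size=(n_candidates, dim))
    return cand[np.argmin(model.predict(cand))]

dim, budget = 3, 60
X = rng.uniform(0.0, 1.0, size=(10, dim))      # initial design
y = np.array([objective(x) for x in X])

ensemble = [GaussianProcessRegressor(),
            RandomForestRegressor(n_estimators=50, random_state=0),
            LinearRegression()]
value = np.zeros(len(ensemble))                # running reward estimate per model
pulls = np.zeros(len(ensemble))                # how often each model was chosen

for _ in range(budget):
    # Softmax over current value estimates: better-performing surrogates
    # are chosen more often, but every model keeps nonzero probability.
    p = np.exp(value - value.max())
    p /= p.sum()
    k = rng.choice(len(ensemble), p=p)

    ensemble[k].fit(X, y)
    x_new = propose(ensemble[k], dim)
    y_new = objective(x_new)                   # the one expensive evaluation

    # Reward the chosen surrogate by its improvement over the incumbent.
    reward = max(0.0, float(y.min()) - y_new)
    pulls[k] += 1
    value[k] += (reward - value[k]) / pulls[k]

    X = np.vstack([X, x_new])
    y = np.append(y, y_new)

print("best value found:", y.min())
```

With an improvement-based reward, surrogates that repeatedly fail to propose better points lose selection probability over time, which mirrors the progressive identification of a suitable model class described in the abstract.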


Notes

  1. The weak global structure of this function is rooted in the existence of two basins of almost the same size.


Acknowledgements

This paper is based on investigations within project D5 of the Collaborative Research Center SFB/TRR 30 and project C2 of the Collaborative Research Center SFB 823, both kindly supported by the Deutsche Forschungsgemeinschaft (DFG).

Author information

Correspondence to Stefan Hess.


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hess, S., Wagner, T., Bischl, B. (2013). PROGRESS: Progressive Reinforcement-Learning-Based Surrogate Selection. In: Nicosia, G., Pardalos, P. (eds.) Learning and Intelligent Optimization. LION 2013. Lecture Notes in Computer Science, vol. 7997. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-44973-4_13

  • DOI: https://doi.org/10.1007/978-3-642-44973-4_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-44972-7

  • Online ISBN: 978-3-642-44973-4

