
A Bayesian Restarting Approach to Algorithm Selection

  • Conference paper

Simulated Evolution and Learning (SEAL 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10593)

Abstract

We propose a Bayesian algorithm selection framework for black-box optimization problems. A set of benchmark problems is used for training, and the performance of a set of algorithms on these problems is recorded. Given an unknown problem, an algorithm is first selected at random and run. A Bayesian approach then measures the similarity between the unknown problem and each benchmark problem; the best-performing algorithm on the most similar benchmark is suggested for the next run. The process repeats until n algorithms have been run, and the best solution over the n runs is reported. We experimentally evaluate the properties and performance of the framework and conclude that (1) it identifies the most similar problem efficiently; (2) it benefits from its restart mechanism; and (3) it performs better as more knowledge is learned. It is therefore a promising algorithm selection framework.
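The restart loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian likelihood used for the Bayesian similarity update, the minimization convention, and all function and parameter names (`bayesian_restart_selection`, `benchmark_perf`, `noise`) are assumptions for the sketch.

```python
import random

def bayesian_restart_selection(problem, algorithms, benchmark_perf,
                               n_restarts, noise=1.0):
    """Sketch of the Bayesian restarting framework (illustrative only).

    benchmark_perf[b][a] holds the recorded best value of algorithm a
    on training benchmark b; each algorithm maps a problem to a
    (value, solution) pair, lower value being better.
    """
    benchmarks = list(benchmark_perf)
    # Uniform prior over which benchmark the unknown problem resembles,
    # kept as unnormalized log-posterior scores.
    log_post = {b: 0.0 for b in benchmarks}

    best_value, best_solution = float("inf"), None
    alg = random.choice(algorithms)            # first run: random algorithm
    for _ in range(n_restarts):
        value, solution = alg(problem)         # run the selected algorithm
        if value < best_value:
            best_value, best_solution = value, solution

        # Bayesian update: benchmarks whose recorded performance for this
        # algorithm is close to the observed value gain posterior mass
        # (Gaussian likelihood with scale `noise` is an assumption here).
        for b in benchmarks:
            diff = value - benchmark_perf[b][alg]
            log_post[b] += -0.5 * (diff / noise) ** 2

        # Next restart: the best-known algorithm on the most similar benchmark.
        most_similar = max(benchmarks, key=log_post.get)
        alg = min(algorithms, key=lambda a: benchmark_perf[most_similar][a])
    return best_value, best_solution
```

After each restart the posterior concentrates on the benchmark whose recorded results best explain the observed run, so later restarts increasingly exploit the training knowledge rather than exploring at random.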



Acknowledgement

The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China [Project No. CityU 125313]. Yaodong He acknowledges the Institutional Postgraduate Studentship from City University of Hong Kong. Yang Lou acknowledges the Institutional Postgraduate Studentship and the Institutional Research Tuition Grant from City University of Hong Kong.

Author information

Corresponding author

Correspondence to Yaodong He.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

He, Y., Yuen, S.Y., Lou, Y. (2017). A Bayesian Restarting Approach to Algorithm Selection. In: Shi, Y., et al. (eds.) Simulated Evolution and Learning. SEAL 2017. Lecture Notes in Computer Science, vol 10593. Springer, Cham. https://doi.org/10.1007/978-3-319-68759-9_33

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-68759-9_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-68758-2

  • Online ISBN: 978-3-319-68759-9

  • eBook Packages: Computer Science, Computer Science (R0)
