Abstract
A Bayesian algorithm selection framework for black-box optimization problems is proposed. A set of benchmark problems is used for training, and the performance of a set of algorithms on these problems is recorded. For a given unknown problem, an algorithm is first selected at random and run on it. A Bayesian approach is then used to measure the similarity between the unknown problem and each training problem; the most similar training problem is identified, and the algorithm that solves it best is suggested for the second run. The process repeats until n algorithms have been run, and the best solution found over the n runs is recorded. We have experimentally evaluated the properties and performance of the framework. The conclusions are that (1) it identifies the most similar problem efficiently, and (2) it benefits from a restart mechanism. Moreover, it performs better as more knowledge is learned. The framework is thus a promising approach to algorithm selection.
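To make the workflow described above concrete, the following is a minimal sketch of the restarting selection loop in Python. It assumes illustrative interfaces that are not taken from the paper: each algorithm is a callable returning its best objective value and the samples it evaluated, similarity stands in for the paper's Bayesian similarity measure, and best_algo_for records the best training-phase algorithm per benchmark problem.

```python
import random

def bayesian_restart_selection(problem, algorithms, training_problems,
                               best_algo_for, similarity, n):
    """Sketch of the restarting selection loop (assumed interfaces).

    problem           : the unknown black-box objective (callable)
    algorithms        : optimizers; each is called as algo(problem) and
                        returns (best_value, samples)
    training_problems : benchmark problems recorded during training
    best_algo_for     : dict mapping a training problem to its best algorithm
    similarity        : similarity(samples, training_problem) -> score,
                        a stand-in for the paper's Bayesian similarity measure
    n                 : total number of algorithm runs allowed
    """
    observations = []          # (x, f(x)) samples gathered over all runs
    best_value = float("inf")  # assuming minimization

    # First run: pick an algorithm at random.
    algo = random.choice(algorithms)

    for _ in range(n):
        value, samples = algo(problem)     # fresh restart on every run
        observations.extend(samples)
        best_value = min(best_value, value)

        # Bayesian step: find the training problem most similar to what has
        # been observed so far, then suggest its best-known algorithm next.
        most_similar = max(training_problems,
                           key=lambda p: similarity(observations, p))
        algo = best_algo_for[most_similar]

    return best_value
```

The sketch only mirrors the high-level loop from the abstract; the actual similarity computation and the training-phase performance records are defined in the paper itself.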
Acknowledgement
The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China [Project No. CityU 125313]. Yaodong He acknowledges the Institutional Postgraduate Studentship from City University of Hong Kong. Yang Lou acknowledges the Institutional Postgraduate Studentship and the Institutional Research Tuition Grant from City University of Hong Kong.
Cite this paper
He, Y., Yuen, S.Y., Lou, Y. (2017). A Bayesian Restarting Approach to Algorithm Selection. In: Shi, Y., et al. Simulated Evolution and Learning. SEAL 2017. Lecture Notes in Computer Science(), vol 10593. Springer, Cham. https://doi.org/10.1007/978-3-319-68759-9_33