
Machine Learning into Metaheuristics: A Survey and Taxonomy

Published: 13 July 2021

Abstract

During the past few years, research in applying machine learning (ML) to design efficient, effective, and robust metaheuristics has become increasingly popular. Many of these machine-learning-supported metaheuristics have generated high-quality results and represent state-of-the-art optimization algorithms. Although various approaches have been proposed, a comprehensive survey and taxonomy on this research topic is still lacking. In this article, we investigate the different opportunities for using ML in metaheuristics. We define in a unified way the various synergies that may be achieved. A detailed taxonomy is proposed according to the search component concerned: the target optimization problem, and the low-level and high-level components of metaheuristics. Our goal is also to motivate researchers in optimization to incorporate ideas from ML into metaheuristics. We identify some open research issues in this topic that require further in-depth investigation.
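As an illustration of the kind of synergy surveyed here (machine learning acting on a low-level search component of a metaheuristic), the following minimal sketch uses an epsilon-greedy multi-armed bandit to adaptively select between two mutation operators inside a simple hill-climbing loop, rewarding each operator by the improvement it yields. The sketch is not taken from the article; the OneMax objective, the flip operators, and all parameter values are illustrative assumptions.

```python
import random

def onemax(x):
    """Illustrative objective: number of ones in a bit list (to be maximized)."""
    return sum(x)

def flip_one(x):
    """Neighborhood operator: flip one random bit."""
    y = list(x)
    i = random.randrange(len(y))
    y[i] ^= 1
    return y

def flip_two(x):
    """Neighborhood operator: flip two random bits."""
    return flip_one(flip_one(x))

def bandit_local_search(n_bits=50, iters=2000, epsilon=0.1):
    """Hill climbing in which an epsilon-greedy multi-armed bandit learns
    online which move operator tends to improve the current solution."""
    operators = [flip_one, flip_two]
    value = [0.0] * len(operators)   # running average reward per operator
    count = [0] * len(operators)

    x = [random.randint(0, 1) for _ in range(n_bits)]
    fx = onemax(x)
    for _ in range(iters):
        # Epsilon-greedy operator selection (exploration vs. exploitation).
        if random.random() < epsilon:
            a = random.randrange(len(operators))
        else:
            a = max(range(len(operators)), key=lambda i: value[i])
        y = operators[a](x)
        fy = onemax(y)
        reward = max(0.0, fy - fx)        # credit assignment: reward = improvement
        count[a] += 1
        value[a] += (reward - value[a]) / count[a]
        if fy >= fx:                      # accept non-worsening moves
            x, fx = y, fy
    return fx, value

if __name__ == "__main__":
    best_value, operator_values = bandit_local_search()
    print("best objective:", best_value)
    print("learned operator values:", operator_values)
```

The same pattern generalizes to richer learners (e.g., reinforcement learning or surrogate models) and to other search components, which is the axis along which the proposed taxonomy is organized.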

    Published In

    ACM Computing Surveys, Volume 54, Issue 6 (Invited Tutorial)
    July 2022, 799 pages
    ISSN: 0360-0300
    EISSN: 1557-7341
    DOI: 10.1145/3475936

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 13 July 2021
    Accepted: 01 March 2021
    Revised: 01 February 2021
    Received: 01 March 2020
    Published in CSUR Volume 54, Issue 6

    Author Tags

    1. ML-supported metaheuristics
    2. Metaheuristics
    3. machine learning
    4. optimization

    Qualifiers

    • Research-article
    • Research
    • Refereed
