Abstract
The suitability of an optimisation algorithm selected from an algorithm portfolio depends on the features of the particular instance to be solved. Understanding the relative strengths and weaknesses of the algorithms in the portfolio is crucial for effective performance prediction, for automated algorithm selection, and for generating knowledge about the ideal conditions under which each algorithm performs well, which in turn can inform better algorithm design. Relying on well-studied benchmark instances, or on randomly generated instances, limits our ability to truly challenge each algorithm in a portfolio and to determine these ideal conditions. Instead, we use an evolutionary algorithm to evolve instances that are uniquely easy or hard for each algorithm, providing a more direct method for studying their relative strengths and weaknesses. The proposed methodology ensures that the meta-data is sufficient for learning the instance features that uniquely characterise the ideal conditions for each algorithm. A case study is presented, based on a comprehensive study of the performance of two heuristics on the Travelling Salesman Problem. The results show that both the search effort and the best-performing algorithm for a given instance can be predicted with high accuracy.
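To make the instance-evolution idea concrete, the following is a minimal sketch, not the authors' implementation. It assumes a toy portfolio of two placeholder construction heuristics (nearest neighbour and cheapest insertion), instances encoded as city coordinates in the unit square, mutation-only evolution with truncation selection, and a fitness equal to the tour-length gap between the two heuristics; the heuristics, search-effort measure, operators, and parameters used in the paper differ and are not specified here.

```python
# Minimal sketch (not the paper's implementation): evolve a TSP instance on
# which one placeholder heuristic (nearest neighbour) does well and another
# (cheapest insertion) does poorly, by maximising the tour-length gap.
import math
import random

N_CITIES, POP_SIZE, GENERATIONS = 30, 20, 50  # illustrative settings only

def tour_length(cities, tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(cities):
    tour, unvisited = [0], set(range(1, len(cities)))
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(cities[tour[-1]], cities[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def cheapest_insertion(cities):
    tour, unvisited = [0, 1], set(range(2, len(cities)))
    while unvisited:
        best = None  # (cost increase, insert position, city)
        for c in unvisited:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                delta = (math.dist(cities[a], cities[c])
                         + math.dist(cities[c], cities[b])
                         - math.dist(cities[a], cities[b]))
                if best is None or delta < best[0]:
                    best = (delta, i + 1, c)
        _, pos, c = best
        tour.insert(pos, c)
        unvisited.remove(c)
    return tour

def gap(cities):
    # Fitness: how much worse cheapest insertion is than nearest neighbour,
    # i.e. how "uniquely easy" the instance is for nearest neighbour.
    return (tour_length(cities, cheapest_insertion(cities))
            - tour_length(cities, nearest_neighbour(cities)))

def mutate(cities, sigma=0.05):
    # Perturb one city's coordinates, clipped to the unit square.
    cities = list(cities)
    i = random.randrange(len(cities))
    x, y = cities[i]
    cities[i] = (min(1.0, max(0.0, x + random.gauss(0, sigma))),
                 min(1.0, max(0.0, y + random.gauss(0, sigma))))
    return cities

def evolve():
    pop = [[(random.random(), random.random()) for _ in range(N_CITIES)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=gap, reverse=True)   # fittest (largest gap) first
        parents = pop[:POP_SIZE // 2]     # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(POP_SIZE - len(parents))]
    return max(pop, key=gap)

if __name__ == "__main__":
    instance = evolve()
    print("evolved instance with tour-length gap", round(gap(instance), 3))
```

Running the same procedure with the sign of the fitness reversed would evolve instances that favour the other heuristic; collecting instances evolved in both directions is the kind of meta-data from which the distinguishing instance features can then be learned, as the abstract describes.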
Cite this article
Smith-Miles, K., van Hemert, J. Discovering the suitability of optimisation algorithms by learning from evolved instances. Ann Math Artif Intell 61, 87–104 (2011). https://doi.org/10.1007/s10472-011-9230-5
Keywords
- Algorithm selection
- Combinatorial optimization
- Travelling salesman problem
- Hardness prediction
- Phase transition
- Instance difficulty