ABSTRACT
In this extended abstract, we examine the common practice of using optimization problem test suites to develop and/or evaluate optimization algorithms, and bring to bear on this practice a number of results from computational learning theory. These results enable optimization algorithm developers to state principled quantitative bounds on the likely performance of their algorithms on unseen problem instances, based on their experimental design and their empirical results on training or test instances. We first recap some relevant results from computational learning theory, then describe how optimization development practice can be recast in a way that allows these results to be applied, and briefly discuss some related implications. An updated version of this article and associated material, including statistical tables relating to generalization bounds, are provided at http://is.gd/evalopt.
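To illustrate the kind of bound the abstract refers to, the following sketch computes a standard Hoeffding-style confidence bound: if an algorithm's performance on each instance is normalized to [0, 1] and measured on n instances drawn i.i.d. from the problem distribution, then with probability at least 1 − δ its expected performance on unseen instances is within sqrt(ln(1/δ)/(2n)) of the observed mean. This is a generic illustration of a generalization bound, not the specific bounds derived in the article; the function name and the example numbers are our own.

```python
import math

def hoeffding_upper_bound(empirical_mean, n, delta):
    """Upper confidence bound on expected per-instance performance.

    empirical_mean: observed mean performance (normalized to [0, 1])
    n: number of i.i.d. test instances
    delta: failure probability (bound holds with prob. >= 1 - delta)
    """
    return empirical_mean + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Example: mean normalized error of 0.12 over 100 test instances,
# at 95% confidence (delta = 0.05).
bound = hoeffding_upper_bound(0.12, 100, 0.05)
```

Note how the bound tightens only as 1/sqrt(n): quadrupling the number of test instances halves the confidence interval, which is one reason such results have practical bite for experimental design.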
REFERENCES
- Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M. (1987) Occam's Razor. Information Processing Letters, 24:377--380.
- Langford, J. (2005) Tutorial on Practical Prediction Theory for Classification. Journal of Machine Learning Research, 6:273--306.
- McAllester, D. (1999) PAC-Bayesian Model Averaging. In Proc. Annual Conf. on Computational Learning Theory (COLT), pp. 164--170.
- Valiant, L. G. (1984) A theory of the learnable. Communications of the ACM, 27(11):1134--1142.
- Vapnik, V. N., Chervonenkis, A. Y. (1971) On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264--280.
Index Terms
- Evaluating optimization algorithms: bounds on the performance of optimizers on unseen problems