It is our great pleasure to welcome you to the proceedings of the 14th ACM/SIGEVO Workshop on Foundations of Genetic Algorithms (FOGA XIV). Since the first FOGA in 1990, the workshop has established itself as the premier event on the foundations of evolutionary computation. The goal of FOGA is to advance the theoretical understanding of randomised search heuristics and to contribute to making these algorithms more useful in practice. The workshop invites submissions on all kinds of randomised search heuristics, including but not limited to evolutionary algorithms, ant colony optimisation, artificial immune systems, particle swarm optimisation, simulated annealing, Bayesian optimisation, and other Monte Carlo methods for search and optimisation. Contributions bridging theory and practice are particularly encouraged. In addition to rigorous mathematical investigations, experimental studies contributing towards the theoretical foundations of randomised search heuristics are also welcome at FOGA.
FOGA 2017 in Copenhagen, Denmark, was the first to be hosted in a Scandinavian country. We received 23 submissions, of which 13 papers were accepted for inclusion in these post-conference proceedings. All submissions were thoroughly peer-reviewed, including a second review phase for conditionally accepted manuscripts. We had 23 registrations from eight countries across four continents.
The presented papers covered an impressive variety of topics: discrete and continuous optimisation problems, single- and multi-objective optimisation, and various search heuristics, including evolution strategies, evolutionary algorithms, factored evolutionary algorithms, particle swarm optimisation, and estimation-of-distribution algorithms.
We thank Mikkel Thorup for giving an inspiring keynote on Fast and Powerful Hashing using Tabulation, and the University of Copenhagen for providing a spectacular setting in Festauditoriet, the picturesque lecture hall of the former Royal Veterinary and Agricultural University.
Proceeding Downloads
Fast and Powerful Hashing using Tabulation
Randomized algorithms are often enjoyed for their simplicity, but the hash functions employed to yield the desired probabilistic guarantees are often too complicated to be practical. Here we survey recent results on how simple hashing schemes based on ...
An Application of Stochastic Differential Equations to Evolutionary Algorithms
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly ...
Runtime Analysis of a Discrete Particle Swarm Optimization Algorithm on Sorting and OneMax
We present the analysis of a discrete particle swarm optimization (PSO) algorithm that works on a significantly large class of discrete optimization problems. Assuming a black-box setting, we prove upper and lower bounds on the expected number of ...
Resampling vs Recombination: a Statistical Run Time Estimation
Noise is pervasive in real-world optimization, but there is still little understanding of the interplay between the operators of randomized search heuristics and explicit noise-handling techniques, such as statistical resampling. In this paper, we ...
On the Use of the Dual Formulation for Minimum Weighted Vertex Cover in Evolutionary Algorithms
We consider the weighted minimum vertex cover problem and investigate how its dual formulation can be exploited to design evolutionary algorithms that provably obtain a 2-approximation. Investigating multi-valued representations, we show that variants ...
Analysis of the (1+1) EA on Subclasses of Linear Functions under Uniform and Linear Constraints
Linear functions have gained a lot of attention in the area of run time analysis of evolutionary computation methods and the corresponding analyses have provided many effective tools for analyzing more complex problems. In this paper, we consider the ...
Analysis of the Clearing Diversity-Preserving Mechanism
Clearing is a niching method inspired by the principle of assigning the available resources among a subpopulation to a single individual. The clearing procedure supplies these resources only to the best individual of each subpopulation: the winner. So ...
Lower Bounds on the Run Time of the Univariate Marginal Distribution Algorithm on OneMax
The Univariate Marginal Distribution Algorithm (UMDA), a popular estimation of distribution algorithm, is studied from a run time perspective. On the classical OneMax benchmark function, a lower bound of Ω(μ√n + n log n), where μ is the population size, ...
Convergence of Factored Evolutionary Algorithms
Factored Evolutionary Algorithms (FEA) have been found to be an effective way to optimize single objective functions by partitioning the variables in the function into overlapping subpopulations, or factors. While there exist several works empirically ...
Hypervolume Subset Selection for Triangular and Inverted Triangular Pareto Fronts of Three-Objective Problems
Hypervolume subset selection is to find a pre-specified number of solutions for hypervolume maximization. The optimal distribution of solutions on the Pareto front has been theoretically studied for two-objective problems in the literature. In this ...
Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions
We investigate evolution strategies with weighted recombination on general convex quadratic functions. We derive the asymptotic quality gain in the limit of the dimension to infinity, and derive the optimal recombination weights and the optimal step-...
On the Statistical Learning Ability of Evolution Strategies
We explore the ability of Evolution Strategies (ESs) to statistically learn the local landscape. Specifically, we consider ESs operating only with isotropic Gaussian mutations near the optimum and investigate the covariance matrix when constructed out ...
Qualitative and Quantitative Assessment of Step Size Adaptation Rules
We present a comparison of step size adaptation methods for evolution strategies, covering recent developments in the field. Following recent work by Hansen et al. we formulate a concise list of performance criteria: a) fast convergence of the mean, b) ...
Linearly Convergent Evolution Strategies via Augmented Lagrangian Constraint Handling
We analyze linear convergence of an evolution strategy for constrained optimization with an augmented Lagrangian constraint handling approach. We study the case of multiple active linear constraints and use a Markov chain approach---used to analyze ...