Do additional target points speed up evolutionary algorithms?

https://doi.org/10.1016/j.tcs.2023.113757

Highlights

  • We study the effect of the placement and number of additional target points on the expected hitting time of evolutionary algorithms.

  • We show that adding exponentially many targets uniformly at random has a negligible effect on the expected optimisation time.

  • When all points within a Hamming ball of radius r around all optima become targets, improvements depend on the considered function.

  • On functions where search trajectories typically pass through a single search point, adding one target there can yield drastic speedups.

Abstract

Most runtime analyses of randomised search heuristics focus on the expected number of function evaluations to find a unique global optimum. We ask a fundamental question: if additional search points are declared optimal, or declared as desirable target points, do these additional optima speed up evolutionary algorithms? More formally, we analyse the expected hitting time of a target set OPT ∪ S, where S is a set of non-optimal search points and OPT is the set of optima, and compare it to the expected hitting time of OPT.

We show that the answer to our question depends on the number and placement of search points in S. For all black-box algorithms and all fitness functions with polynomial expected optimisation times we show that, if additional optima are placed randomly, even an exponential number of optima has a negligible effect on the expected optimisation time. Considering Hamming balls around all global optima gives an easier target for some algorithms and functions and can shift the phase transition with respect to offspring population sizes in the (1,λ) EA on OneMax. However, for the one-dimensional Ising model the time to reach Hamming balls of radius (1/2 − ε)n around optima does not reduce the asymptotic expected optimisation time in the worst case. Finally, on functions where search trajectories typically join in a single search point, turning one search point into an optimum drastically reduces the expected optimisation time.

Section snippets

Motivation

Runtime analysis has emerged as a fruitful research area that is helping to develop and consolidate our understanding of the performance of evolutionary algorithms and many other randomised search heuristics [1], [2], [3], [4].

Many results have been obtained for problems from combinatorial optimisation [1] and for pseudo-Boolean functions f: {0,1}^n → ℝ. The latter includes frequently used benchmark functions like OneMax, linear functions, LeadingOnes, Ridge, Jump, Cliff, Plateau (see Section 2 for

Preliminaries

We consider the very general class of all black-box algorithms shown in Algorithm 1 that comprises meta-heuristics such as evolutionary algorithms, ant colony optimisation, particle swarm optimisation, estimation-of-distribution algorithms, artificial immune systems, tabu search, simulated annealing, and many more. Here OPT_f denotes the set of optimal solutions for the function f and S is a set of additional target points. Note that in the classical setting S = ∅.
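Algorithm 1 is not reproduced in this snippet; as a hedged illustration only, the stopping criterion it describes can be sketched with a minimal (1+1) EA on bit strings that halts once the current search point lies in OPT_f ∪ S (all function and variable names here are illustrative, not taken from the paper):

```python
import random

def one_plus_one_ea(f, n, targets, max_evals=200_000):
    """Minimal (1+1) EA on {0,1}^n that stops as soon as the current
    search point lies in the target set (OPT_f together with S)."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    evals = 1
    while x not in targets and evals < max_evals:
        # standard bit-wise mutation with rate 1/n
        y = tuple(b ^ 1 if random.random() < 1 / n else b for b in x)
        evals += 1
        if f(y) >= f(x):  # elitist acceptance
            x = y
    return x, evals

# Example: OneMax (fitness = number of ones) with the all-ones string
# as the unique optimum and S empty, i.e. the classical setting.
n = 10
hit, t = one_plus_one_ea(sum, n, targets={(1,) * n})
```

Enlarging `targets` beyond the optima is exactly the change whose effect the paper quantifies.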

We also show more specific

Random and worst-case target points

We first consider the simple scenario where additional target points are added uniformly at random as an average case scenario for the placement of additional optima. It is not surprising that these additional targets do not help much if they only make up a tiny fraction of the whole search space.
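A quick back-of-the-envelope calculation (not from the paper; the function name is illustrative) shows why: even exponentially many uniformly random targets, say 2^(n/2) of them, cover only an exponentially small fraction of the 2^n points in the search space.

```python
# Fraction of {0,1}^n covered by num_targets uniformly random points:
# with num_targets = 2**(n/2) this is 2**(-n/2), vanishing in n.
def target_fraction(n, num_targets):
    return num_targets / 2**n

for n in (20, 40, 60):
    print(n, target_fraction(n, 2 ** (n // 2)))
```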

Adding targets around optima

We now consider scenarios where additional optima or target points are placed “close” to global optima. There are several possible notions of “close”. We could aim to reach a solution of a specified minimum fitness. This scenario is highly relevant for practice and has been investigated implicitly in several works (e.g. [43], [44]). Recently, it was studied explicitly under the term fixed target runtime analysis [18].

We cite a fixed-target result for LeadingOnes for illustration. It is notable
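The other notion of "close" used above, Hamming balls around optima, can be made concrete with a small helper sketch (all names illustrative); `ball_size` counts how many points a radius-r ball adds around a single centre:

```python
from math import comb

def hamming_dist(x, y):
    """Number of positions where bit strings x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def in_hamming_ball(x, optimum, r):
    """True if x lies within Hamming distance r of the optimum."""
    return hamming_dist(x, optimum) <= r

def ball_size(n, r):
    """Number of points within Hamming distance r of a fixed centre:
    sum of binomial coefficients C(n, 0) + ... + C(n, r)."""
    return sum(comb(n, k) for k in range(r + 1))
```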

Best-case placement of additional optima

Finally, we consider a best possible placement of additional target points, and how much the expected running time can be decreased by carefully choosing additional targets.

A rather obvious example of a huge benefit through added optima is the function Trap, a deceptive function on which the (1+1) EA requires time Θ(n^n) [11]. Turning the trap into a target point turns the function into OneMax with an additional optimum at 0^n.
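As an informal sketch (the exact definition of Trap may differ in detail from the paper's), the deceptive structure and the enlarged target set look like this:

```python
def trap(x):
    """Deceptive Trap: the all-zeros string is the unique optimum,
    but every other point rewards additional ones, as in OneMax."""
    return len(x) + 1 if sum(x) == 0 else sum(x)

# Classical target {0^n}: the (1+1) EA hill-climbs towards 1^n and
# needs expected time Theta(n^n) to jump back to 0^n.
# Enlarged target {0^n, 1^n}: the same hill-climb now succeeds fast.
n = 8
targets = {(0,) * n, (1,) * n}
```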

Theorem 5.1

The expected time for the (1+1) EA operating on Trap to find {0n,1n}

Conclusions

Runtime analysis of randomised search heuristics concentrates on the expected number of fitness function evaluations until a certain target point (often a unique global optimum) is hit for the first time. We studied how the expected optimisation time changes if, in addition to global optima, additional target points are considered.

Our results point out that the answer depends on the size and placement of the additional target points as well as on characteristics of the fitness function. We considered a worst-case

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (53)

  • T. Jansen

    Analyzing Evolutionary Algorithms – The Computer Science Perspective

    (2013)
  • T. Jansen et al.

    On the choice of the offspring population size in evolutionary algorithms

    Evol. Comput.

    (2005)
  • J. Lengler et al.

    Drift analysis and evolutionary algorithms revisited

    Comb. Probab. Comput.

    (2018)
  • B. Doerr et al.

    Mutation rate matters even when optimizing monotonic functions

    Evol. Comput.

    (2013)
  • J. Lengler

    A general dichotomy of evolutionary algorithms on monotone functions

    IEEE Trans. Evol. Comput.

    (2020)
  • J. Lengler et al.

    Exponential slowdown for larger populations: the (μ+1)-EA on monotone functions

  • D. Sudholt

    A new method for lower bounds on the running time of evolutionary algorithms

    IEEE Trans. Evol. Comput.

    (2013)
  • B. Doerr et al.

    Lower bounds from fitness levels made easy

  • S. Droste et al.

    Upper and lower bounds for randomized search heuristics in black-box optimization

    Theory Comput. Syst.

    (2006)
  • P.K. Lehre et al.

    Black-box search by unbiased variation

    Algorithmica

    (2012)
  • C. Doerr

    Complexity theory for discrete black-box optimization heuristics

  • M. Buzdalov et al.

    Fixed-target runtime analysis

  • M. Buzdalov et al.

    Fixed-target runtime analysis

    Algorithmica

    (2022)
  • W. Gao et al.

    Runtime analysis for maximizing population diversity in single-objective optimization

This article belongs to Section C: Theory of natural computing, Edited by Lila Kari.