
Computer Science Review

Volume 46, November 2022, 100512

Review article
Hybridizations of evolutionary algorithms with Large Neighborhood Search

https://doi.org/10.1016/j.cosrev.2022.100512

Abstract

Recent developments of evolutionary algorithms (EAs) for discrete optimization problems are often characterized by the hybridization of EAs with local search methods, in particular with Large Neighborhood Search. In this survey, we consider some of the most promising directions of this kind of hybridization and provide examples in the context of well-known optimization problems. We distinguish different approaches by the algorithmic components in which they make use of Large Neighborhood Search: the initialization, recombination, and local improvement stages of hybrid EAs.

Introduction

Evolutionary algorithms (EAs), including genetic algorithms (GAs) [1], [2], evolution strategies [3], genetic programming [4], differential evolution [5], and artificial immune systems [6], are based on the simulation of biological evolution for solving optimization problems. In these algorithms, a population consists of individuals that, in most cases, represent tentative solutions (called phenotypes) to an optimization problem. The fitness of an individual is calculated on the basis of the objective function value of the tackled optimization problem. The tentative solutions (individuals) are often, but not necessarily, encoded as sequences of characters from a certain alphabet, which are called genotypes. Evolutionary algorithms demonstrate some features known from population genetics (such as punctuated equilibria [7], genotype modularity [8], error thresholds for the mutation rate [9], [10], etc.), but they also exhibit features that are not known from their natural counterparts. Currently, EAs are widely used for solving hard optimization problems in discrete as well as in continuous domains. In this survey, however, we limit ourselves to discrete optimization.

It is generally recognized that one of the strengths of EAs is their capability for exploring the search space of the tackled optimization problem [11]. On the other hand, EAs are usually said to have a weakness in finding the best solutions within a confined area of the search space (exploitation) [12]. Therefore, EAs are often combined with local search methods with the goal of improving individual solutions. Such a procedure has no counterpart in biological evolutionary processes, which originally gave rise to the field of evolutionary computation. Local improvements of this kind, however, may be observed in social systems, if they—that is, the improvements—are viewed from the perspective of memetics [13] as abilities of self-improving individuals. This analogy is expressed explicitly in the class of memetic algorithms, which make up a significant sub-class of hybrid evolutionary algorithms. A memetic algorithm is defined as an algorithmic hybridization of a population-based method with one or more local search techniques and/or problem-specific constructive heuristics (see e.g. [14]). This definition is, in fact, so broad that most of the existing hybrid evolutionary algorithms may be regarded as memetic algorithms. Moreover, memetic algorithms are the most well-known representatives of methods from the broader field of memetic computing [15], which studies computing structures composed of interacting modules (memes) whose evolution dynamics are inspired by the diffusion of ideas [14]. This field emerged due to the combination of methods from several branches of computer science (evolutionary computation, multiobjective optimization, machine learning, mathematical programming, approximation algorithms, etc.), and due to the transfer of methods from other sciences, such as biology, sociology, and physics.

More recent applications of EAs to discrete optimization problems are often characterized by hybridization with a specific type of local search known as large neighborhood search (LNS) [16] or very large-scale neighborhood search [17]. Generally speaking, LNS algorithms are local search methods that explore a large—sometimes exponentially sized—neighborhood at each step. Depending on (1) the way in which the large neighborhoods are generated and (2) the techniques used to explore these large neighborhoods, different terminology is used to refer to the corresponding LNS algorithms. Many LNS approaches are based, for example, on the principle of ruin-and-recreate [18]. The same type of technique is sometimes called destroy-and-recreate or destroy-and-rebuild. At each iteration, first, the current solution is partially destroyed. Then, an exact or approximate technique—such as a greedy heuristic—is applied for finding an improving solution among all solutions that include the respective partial solution. The large neighborhoods generated in this context are known as destruction-based large neighborhoods. Generally, a time limit is imposed on this last step in order to avoid wasting valuable computation time. In other words, even if an exact technique is used for exploring the respective large neighborhood, the returned solution is often not the best one possible. Numerous applications of this type of LNS can be found in the related literature, including, for example, [19], [20], [21]. However, there are many well-known alternative ways of defining large neighborhoods, as seen, for example, in local branching [22], the corridor method [23], and POPMUSIC [24]. Finally, note that, in this survey, we also regard as LNS methods those techniques that perform only a single LNS step at each application.
A well-known example concerns optimal recombination and solution merging, where two or more parent solutions are merged and an exact or approximate technique is used to search for an improving solution in the resulting sub-instance. In this survey, we will consider some of the most promising approaches to employing LNS in EAs for discrete optimization problems and provide examples of this type of hybridization.
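To make the ruin-and-recreate principle concrete, the following is a minimal sketch of a destruction-based LNS on a toy 0-1 knapsack instance of our own (the instance data, the destroy size, and the greedy value/weight repair rule are all illustrative assumptions, not taken from the cited works):

```python
import random

# Toy 0-1 knapsack instance (illustrative data).
values  = [10, 7, 6, 12, 3, 8, 5, 9]
weights = [ 4, 3, 2,  6, 1, 4, 2, 5]
CAPACITY = 15

def total(sol, coeff):
    """Sum the coefficients of the selected items."""
    return sum(c for c, chosen in zip(coeff, sol) if chosen)

def greedy_repair(partial):
    """Recreate step: greedily add items (by value/weight ratio) to a partial solution."""
    sol = list(partial)
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    for i in order:
        if not sol[i] and total(sol, weights) + weights[i] <= CAPACITY:
            sol[i] = 1
    return sol

def lns(iterations=200, destroy_size=3, seed=0):
    random.seed(seed)
    best = greedy_repair([0] * len(values))   # initial solution
    for _ in range(iterations):
        partial = list(best)
        # Destroy step: drop a few randomly chosen selected items.
        selected = [i for i, x in enumerate(partial) if x]
        for i in random.sample(selected, min(destroy_size, len(selected))):
            partial[i] = 0
        # The recreate step explores the destruction-based large neighborhood
        # (all solutions containing the retained partial solution) heuristically.
        candidate = greedy_repair(partial)
        if total(candidate, values) > total(best, values):
            best = candidate
    return best
```

An exact repair (e.g., solving the residual knapsack by dynamic programming) could replace `greedy_repair` without changing the overall destroy/recreate loop.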

The general scheme of a hybrid evolutionary algorithm is given in Algorithm 1. Note that, in some implementations of hybrid EAs, the order of the application of the mutation and recombination operators (see Steps 2.2 and 2.3) is reversed.
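As an illustration of this general scheme, the following sketch shows the stages in code (the function names, the steady-state replacement rule, and the OneMax demo are our own choices, not a reproduction of Algorithm 1):

```python
import random

def hybrid_ea(init, recombine, mutate, improve, fitness,
              pop_size=20, generations=50, seed=0):
    """Generic hybrid-EA skeleton: initialization, parent selection,
    recombination, mutation, and local improvement before replacement."""
    random.seed(seed)
    population = [improve(init()) for _ in range(pop_size)]   # Step 1: (improved) initial population
    for _ in range(generations):
        p1, p2 = random.sample(population, 2)                 # Step 2.1: parent selection
        child = recombine(p1, p2)                             # Step 2.2: recombination
        child = mutate(child)                                 # Step 2.3: mutation
        child = improve(child)                                # local improvement (e.g., an LNS step)
        worst = min(range(pop_size), key=lambda i: fitness(population[i]))
        if fitness(child) >= fitness(population[worst]):      # steady-state replacement
            population[worst] = child
    return max(population, key=fitness)

# Toy usage on OneMax (maximize the number of ones in a bit string).
n = 30
best = hybrid_ea(
    init=lambda: [random.randint(0, 1) for _ in range(n)],
    recombine=lambda a, b: [random.choice(g) for g in zip(a, b)],   # uniform crossover
    mutate=lambda s: [1 - g if random.random() < 1 / n else g for g in s],
    improve=lambda s: s,            # identity here; an LNS step would slot in
    fitness=sum,
)
```

The hybridizations discussed in this survey correspond to different choices of `init`, `recombine`, and `improve` in such a skeleton.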

In this survey, we consider hybridization methods that arise in the initialization stage, in recombination (in the crossover operators), and in the local improvement stage before a tentative solution is added to the population. In each of these cases, sub-instances of the tackled problem instance are formulated and solved exactly or approximately.

LNS methods are often used in hybrid evolutionary algorithms at the initialization stage and before an offspring is added to the population. Experimental results show that even neighborhoods of exponential size (w.r.t. the size of the problem input) may be applied effectively for perturbations in EAs. For example, Dynasearch, Ejection Chains, and assignment-type neighborhoods can be explored in polynomial time and space in some cases [25], [26], [27], [28], [29], [30].
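To illustrate how an exponentially large neighborhood can be searched in polynomial time, the following Dynasearch-style sketch uses a simplified setting of our own: a linear assignment-type objective where the cost changes of non-overlapping adjacent swaps add up, so the best of the 2^(n-1) combinations of pairwise-independent swaps can be found by a linear-time dynamic program:

```python
# Objective: item perm[p] placed at position p costs c[item][p]; minimize total cost.
# Swaps of positions (p, p+1) and (q, q+1) with q >= p+2 touch disjoint positions,
# so their cost changes are additive and a DP over swap positions suffices.

def dynasearch_step(perm, c):
    n = len(perm)
    # gain[p] = cost decrease from swapping positions p and p+1 in isolation
    gain = [(c[perm[p]][p] + c[perm[p + 1]][p + 1])
            - (c[perm[p]][p + 1] + c[perm[p + 1]][p])
            for p in range(n - 1)]
    # dp[p] = best total gain using swaps with index < p; swap p-1 and swap p-2
    # overlap (they share a position) and cannot both be taken.
    dp = [0] * n
    take = [False] * n
    for p in range(1, n):
        skip = dp[p - 1]
        use = (dp[p - 2] if p >= 2 else 0) + gain[p - 1]
        dp[p], take[p] = (use, True) if use > skip else (skip, False)
    # Reconstruct and apply the chosen independent swaps.
    new = list(perm)
    p = n - 1
    while p >= 1:
        if take[p]:
            new[p - 1], new[p] = new[p], new[p - 1]
            p -= 2
        else:
            p -= 1
    return new, dp[n - 1]

# Illustrative instance and starting permutation.
c = [[(3 * i + 2 * p) % 7 for p in range(5)] for i in range(5)]
perm = [2, 0, 3, 1, 4]
improved, gain_total = dynasearch_step(perm, c)
```

For richer objectives (e.g., total weighted tardiness, as in the cited Dynasearch literature) the same DP idea applies, with more involved gain computations.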

In most applications, the recombination (crossover) operator at Step 2.2 has a randomized behavior (see e.g. [1], [2]). In this survey, however, we focus mainly on deterministic recombination operators for large neighborhood exploration. So-called optimal recombination consists of searching for the best possible offspring of a binary crossover operator that satisfies the gene transmission property [31]. Dynamic Programming, Branch and Cut, and Branch and Bound methods, as well as specialized enumeration techniques, have been used successfully for solving such sub-problems [32], [33], [34], [35], [36], [37], [38]. Crossover and mutation operators based on Mixed Integer Programming (MIP) have been proposed and experimentally tested on different discrete optimization problems [33], [39], [40], [41], [42].

Operators of evolutionary algorithms have various tunable parameters. One of the fields of active research in the area of evolutionary computation concerns adaptive parameter control and the coordination of different components. Adaptation and coordination in state-of-the-art EAs are often performed by means of fitness-based or distance-based diversity adaptive rules, adaptive hyper-heuristics, learning processes, and others [15], [43], [44]. Adaptive control uses feedback from the search history to determine the direction of the further search, and parameter values are modified accordingly. In this context, note that parameters may be updated when certain events occur (e.g., threshold values are reached for population diversity or size) or in accordance with the quality of the produced solutions, the current state of the population, and its prehistory (e.g., credit assignment and lifetime-based strategies) [15], [44]. In particular, this point of view allows one to consider learning-based optimization algorithms such as Ant Colony Optimization [45] as special cases of memetic algorithms (assuming the pheromone levels are a set of tunable parameters of a mutation operator). Although there is no compelling need to view different techniques under a common umbrella, doing so sometimes helps to identify common aspects as well as differences.
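A minimal sketch of such adaptive control is probability matching with a recency-weighted credit assignment (the class name, the `p_min` floor, and the decay factor are our own illustrative choices):

```python
import random

class AdaptiveOperatorSelector:
    """Probability matching: operators that recently produced improvements
    (credits) are applied more often; a floor p_min keeps all operators alive."""

    def __init__(self, operators, p_min=0.1, decay=0.8):
        self.operators = operators
        self.credit = {name: 1.0 for name in operators}
        self.p_min, self.decay = p_min, decay

    def probabilities(self):
        total = sum(self.credit.values())
        k = len(self.operators)
        # Floor of p_min per operator; remaining mass split proportionally to credit.
        return {name: self.p_min + (1 - k * self.p_min) * c / total
                for name, c in self.credit.items()}

    def select(self):
        probs = self.probabilities()
        names = list(probs)
        return random.choices(names, weights=[probs[n] for n in names])[0]

    def reward(self, name, improvement):
        # Credit assignment: exponential recency-weighted average of improvements.
        self.credit[name] = (self.decay * self.credit[name]
                             + (1 - self.decay) * max(0.0, improvement))

# Illustrative usage: choose among three hypothetical operators.
sel = AdaptiveOperatorSelector(["flip_mutation", "lns_step", "uniform_crossover"])
op = sel.select()
```

After each application of an operator, the EA would call `reward` with the fitness improvement achieved, steering future selections toward operators that pay off.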

In Section 2, we review the usage of large neighborhoods in hybrid evolutionary algorithms. In particular, we discuss Large Neighborhood Search and its adaptive version, Assignment-type sub-problems, Ejection Chains, Dynasearch and Local Branching. In Section 3, we consider the sub-problems arising at the recombination stage and their computational complexity. A more general approach of solution merging and techniques based on mixed integer programming (MIP) for crossover and mutation are discussed in Section 4. Finally, hybridizations of ant colony optimization (ACO) algorithms, which are closely related to EAs, are considered in Section 5. Conclusion and open problems are provided in Section 6.

We would also like to point out again that, in this survey, we generally do not cover non-evolutionary metaheuristics. In particular, (meta)heuristics very different from EAs—such as tabu search, iterated local search, and beam search—are not considered in this survey. Information about hybrids concerning these approaches may be found in [33], [46]. One of the widely used metaheuristic methods is Variable Neighborhood Search, in which the size (and structure) of the utilized neighborhoods may change in the process of local search [47]. Despite the large number of works successfully combining EAs with Variable Neighborhood Search (see e.g. [48], [49]), we do not consider them in this survey because Variable Neighborhood Search approaches generally make use of rather small neighborhoods (compared to LNS methods). Moreover, the body of literature on such hybrid methods deserves a separate survey. Also, in this survey, we do not go into detail concerning automated parameter tuning and the application of machine learning in the context of hybrid EAs. In general, most of the methods of parameter tuning and heuristic selection that are known in the area of EAs and memetic computing [15], [44] are applicable to hybrid EAs as well.

The evolution strategy (1 + 1) EA, one of the simplest evolutionary heuristics, with a population of size 1 and the “global mutation” operator, where each bit is flipped independently with a given probability (see e.g. [50]), is a large neighborhood search method as well. However, we do not consider hybridizations of the (1 + 1) EA with other EAs here because it is more natural to view the resulting algorithms simply as EA variations.
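A minimal sketch of the (1 + 1) EA on OneMax (our own illustration; with mutation probability 1/n, every bit string is a possible offspring, which is why global mutation defines an exponentially large neighborhood):

```python
import random

def one_plus_one_ea(fitness, n, steps=2000, seed=0):
    """(1+1) EA: population of one; offspring by global mutation; keep if not worse."""
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        # Global mutation: each bit flips independently with probability 1/n,
        # so any point of the search space can be reached in a single step.
        y = [1 - b if random.random() < 1 / n else b for b in x]
        if fitness(y) >= fitness(x):     # elitist acceptance
            x = y
    return x

result = one_plus_one_ea(sum, n=20)      # OneMax: maximize the number of ones
```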

Section snippets

Evolutionary algorithms and large neighborhoods

In this section, we consider applications of different LNS methods for solving the optimization problems that arise during the initialization and during the local improvement of an offspring in hybrid EAs.

Optimal recombination problem

Assume that solutions to a combinatorial optimization problem are represented by strings of length n, composed of symbols from some finite alphabet. These strings are called individuals and their components are called genes. The following definition of the optimal recombination problem (ORP) is motivated by the principles of the (strictly) gene transmitting recombination formulated by N. Radcliffe in [31], [74], [75].

Given: an instance I of a combinatorial optimization problem to be minimized,
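For intuition, gene transmission means that every gene of the offspring must be inherited from one of the parents; positions where the parents agree are therefore fixed, and an ORP can be solved by enumerating the 2^d assignments of the d differing positions. The sketch below illustrates this with a hypothetical linear objective of our own (practical ORP solvers use Dynamic Programming or Branch and Bound instead of plain enumeration):

```python
from itertools import product

def optimal_recombination(p1, p2, fitness):
    """Best gene-transmitting offspring of two parents: only positions
    where the parents differ are free; enumerate all 2^d assignments."""
    free = [i for i in range(len(p1)) if p1[i] != p2[i]]
    best, best_fit = list(p1), fitness(p1)
    for choice in product((0, 1), repeat=len(free)):
        child = list(p1)
        for pos, c in zip(free, choice):
            child[pos] = p2[pos] if c else p1[pos]
        f = fitness(child)
        if f > best_fit:
            best, best_fit = child, f
    return best

# Toy maximization objective over binary strings (illustrative only).
w = [3, -1, 4, 1, -5, 9, 2, -6]
fit = lambda s: sum(wi * g for wi, g in zip(w, s))
child = optimal_recombination([1, 0, 1, 1, 0, 0, 1, 1],
                              [0, 1, 1, 0, 1, 1, 1, 0], fit)
```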

Solution merging

Solution Merging may be considered as a generalization of optimal recombination [46] and may be used beyond the class of hybrid evolutionary algorithms captured in Algorithm 1. In solution merging, the supplementary sub-problem may be defined by two or more parent solutions and the definition of a feasible solution for this problem may be less constrained, compared to the case of the ORP. The most common approach to solution merging is to apply ready-to-use MIP solvers to these sub-problems.
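As an illustration of this idea, the following sketch (a toy knapsack example of our own, not taken from the cited works) merges two parent solutions by restricting the sub-instance to the union of the items used by at least one parent and then solving that restricted sub-instance exactly; in practice, a MIP solver would replace the brute-force enumeration:

```python
from itertools import combinations

def merge_solutions(parents, values, weights, capacity):
    """Solution merging: build the sub-instance restricted to items selected
    by at least one parent, then solve it exactly (here by enumeration)."""
    pool = sorted(set().union(*parents))      # union of the parents' items
    best, best_val = frozenset(), 0
    for r in range(len(pool) + 1):
        for subset in combinations(pool, r):
            if sum(weights[i] for i in subset) <= capacity:
                val = sum(values[i] for i in subset)
                if val > best_val:
                    best, best_val = frozenset(subset), val
    return best

# Illustrative instance and two parent solutions (sets of selected items).
values  = [10, 7, 6, 12, 3, 8]
weights = [ 4, 3, 2,  6, 1, 4]
merged = merge_solutions([{0, 2, 4}, {1, 3}], values, weights, capacity=12)
```

Because each parent is itself a feasible subset of the pool, the merged solution is never worse than either parent, which is what makes merging attractive as a recombination-like step.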

Approaches closely related to EAs hybridized with LNS

Even though this section aims to present hybridizations with LNS of approaches closely related to EAs, it focuses only on hybridizations concerning ant colony optimization (ACO) [104]. The reason is that we believe that other approaches related to EAs—such as particle swarm optimization, bee colony approaches, etc.—have not yet been hybridized with LNS. New solutions are generated in ACO algorithms at each iteration, based on so-called pheromone information and on greedy information. Hereby, the
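The solution construction mentioned above can be sketched as follows: each solution component is chosen with probability proportional to (pheromone)^alpha · (greedy information)^beta, here for a small TSP instance of our own (the instance data and the uniform initial pheromone values are illustrative assumptions):

```python
import random

def construct_tour(dist, pheromone, alpha=1.0, beta=2.0, seed=0):
    """One ACO solution construction: extend a partial tour node by node,
    choosing each next node with probability proportional to
    (pheromone ** alpha) * (heuristic ** beta), heuristic = 1 / distance."""
    random.seed(seed)
    n = len(dist)
    tour = [0]
    unvisited = set(range(1, n))
    while unvisited:
        i = tour[-1]
        cand = sorted(unvisited)
        w = [(pheromone[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
             for j in cand]
        tour.append(random.choices(cand, weights=w)[0])
        unvisited.remove(tour[-1])
    return tour

# Small symmetric instance; uniform initial pheromone (illustrative values).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
tau = [[1.0] * 4 for _ in range(4)]
tour = construct_tour(dist, tau)
```

In a full ACO algorithm, pheromone values would be updated after each iteration in favor of the components of good tours; an LNS hybridization would additionally apply a large-neighborhood improvement step to the constructed solutions.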

Conclusions and open problems

Most of the hybrid algorithms discussed in this survey are competitive with state-of-the-art algorithms (or at least they were at the time of their publication). This fact provides experimental evidence that hybridizations of EAs with Large Neighborhood Search methods are often worth the effort. It is important to note that most of the above mentioned examples of hybrid evolutionary algorithms employ modern MIP techniques or some problem-specific knowledge, which may be represented by means

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

C. Blum was funded by MCIN/AEI/10.13039/501100011033 through grants PID2019-104156GB-I00 and TED2021-129319B-I00. The research of A. Eremeev and Yu. Zakharova was supported by state task of the IM SB RAS, project FWNF-2022-0020 (Section 2) and the Russian Science Foundation project number 17-18-01536 (Section 3). The authors are grateful to Yuri Kochetov for his helpful comments.

References (114)

  • Bontoux B. et al., A memetic algorithm with a large neighborhood crossover operator for the generalized traveling salesman problem, Comput. Oper. Res. (2010)
  • Yagiura M. et al., The use of dynamic programming in genetic algorithms for permutation problems, European J. Oper. Res. (1996)
  • Borisovsky P. et al., Genetic algorithms for a supply management problem: MIP-recombination vs greedy decoder, European J. Oper. Res. (2009)
  • Wen Y. et al., A heuristic-based hybrid genetic-variable neighborhood search algorithm for task scheduling in heterogeneous multiprocessor system, Inform. Sci. (2011)
  • Xia H. et al., A hybrid genetic algorithm with variable neighborhood search for dynamic integrated process planning and scheduling, Comput. Ind. Eng. (2016)
  • Gutin G., Exponential neighbourhood local search for the traveling salesman problem, Comput. Oper. Res. (1999)
  • Dror M. et al., A vehicle routing improvement algorithm comparison of a greedy and a matching implementation for inventory routing, Comput. Oper. Res. (1986)
  • Ahuja R.K. et al., A survey of very large-scale neighborhood search techniques, Discrete Appl. Math. (2002)
  • Azi N. et al., An adaptive large neighborhood search for a vehicle routing problem with multiple routes, Comput. Oper. Res. (2014)
  • Liu R. et al., A hybrid large-neighborhood search algorithm for the cumulative capacitated vehicle routing problem with time-window constraints, Appl. Soft Comput. (2019)
  • Voigt S. et al., Hybrid adaptive large neighborhood search for vehicle routing problems with depot location decisions, Comput. Oper. Res. (2022)
  • Hasani A. et al., Robust global supply chain network design under disruption and uncertainty considering resilience strategies: A parallel memetic algorithm for a real-life case study, Transp. Res. E (2016)
  • Han P. et al., Multiple GEO satellites on-orbit repairing mission planning using large neighborhood search-adaptive genetic algorithm, Adv. Space Res. (2022)
  • Hansen P. et al., Variable neighborhood search and local branching, Comput. Oper. Res. (2006)
  • Ahuja R. et al., A greedy genetic algorithm for the quadratic assignment problem, Comput. Oper. Res. (2000)
  • Doerr B. et al., More effective crossover operators for the all-pairs shortest path problem, Theoret. Comput. Sci. (2013)
  • Holland J., Adaptation in Natural and Artificial Systems (1975)
  • Goldberg D.E., Genetic Algorithms in Search, Optimization and Machine Learning (1989)
  • Beyer H.-G., The Theory of Evolution Strategies (2001)
  • Koza J.R., Genetic Programming II: Automatic Discovery of Reusable Programs (Complex Adaptive Systems) (1994)
  • Storn R. et al., Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. (1997)
  • Bersini H. et al., Hints for adaptive problem solving gleaned from immune networks
  • Spirov A.V. et al., Modularity in biological evolution and evolutionary computation, Biol. Bull. Rev. (2020)
  • Wilke C.O., Quasispecies theory in the context of population genetics, BMC Evol. Biol. (2005)
  • Dang D.-C., Eremeev A., Lehre P., Escaping local optima with non-elitist evolutionary algorithms, in: Proceedings of...
  • Dang D.-C. et al., Non-elitist evolutionary algorithms excel in fitness landscapes with sparse deceptive regions and dense valleys
  • Gendreau M. et al., Handbook of Metaheuristics (2010)
  • Dawkins R., The Selfish Gene (1976)
  • Neri F. et al., Handbook of Memetic Algorithms (2012)
  • Pisinger D. et al., Large neighborhood search
  • Fischetti M. et al., Local branching, Math. Program. Ser. B (2003)
  • Caserta M. et al., A corridor method based hybrid algorithm for redundancy allocation, J. Heuristics (2016)
  • Lalla-Ruiz E. et al., POPMUSIC as a matheuristic for the berth allocation problem, Ann. Math. Artif. Intell. (2016)
  • Angel E. et al., A dynasearch neighborhood for the bicriteria traveling salesman problem
  • Capua R. et al., A study on exponential-size neighborhoods for the bin packing problem with conflicts, J. Heuristics (2018)
  • Potvin J. et al., Tabu search with ejection chains for the vehicle routing problem with private fleet and common carrier, J. Oper. Res. Soc. (2011)
  • Rego C. et al., Doubly-rooted stem-and-cycle ejection chain algorithm for the asymmetric traveling salesman problem, Networks (2016)
  • Radcliffe N., The algebra of genetic algorithms, Ann. Math. Artif. Intell. (1994)
  • Balas E. et al., Optimized crossover-based genetic algorithms for the maximum cardinality and maximum weight clique problems, J. Heuristics (1998)
  • Chicano F. et al., Quasi-optimal recombination operator