Computing, Artificial Intelligence and Information Management
Empirical analysis of self-adaptive differential evolution

https://doi.org/10.1016/j.ejor.2006.10.020

Abstract

Differential evolution (DE) is generally considered a reliable, accurate, robust and fast optimization technique. DE has been successfully applied to solve a wide range of numerical optimization problems. However, the user is required to set the values of the control parameters of DE for each problem. Such parameter tuning is a time consuming task. In this paper, a self-adaptive DE (SDE) algorithm which eliminates the need for manual tuning of control parameters is empirically analyzed. The performance of SDE is investigated and compared with other well-known approaches. The experiments conducted show that SDE generally outperforms the other DE algorithms on all the benchmark functions. Moreover, the performance of SDE using the ring neighborhood topology is investigated.

Introduction

Evolutionary algorithms (EAs) are general-purpose stochastic search methods simulating natural selection and biological evolution. EAs differ from other optimization methods, such as hill-climbing (Michalewicz and Fogel, 2000) and simulated annealing (Van Laarhoven and Aarts, 1987), in the fact that EAs maintain a population of potential (or candidate) solutions to a problem, and not just one solution.

Generally, all EAs work as follows: a population of individuals is randomly initialized where each individual represents a potential solution to the problem at hand. The quality of each solution is evaluated using a fitness function. A selection process is applied during each iteration of an EA in order to form a new population. The selection process is biased toward the fitter individuals to increase their chances of being included in the new population. Individuals are altered using unary transformation (mutation) and higher-order transformation (crossover). This procedure is repeated until convergence is reached. The best solution found is expected to be a near-optimum solution (Michalewicz, 1996).
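The generational loop described above can be sketched in Python. This is an illustrative skeleton, not any specific algorithm from the paper; the operator names (`init`, `mutate`, `crossover`) and the tournament selection scheme are assumptions chosen for brevity.

```python
import random

def evolve(fitness, init, mutate, crossover, pop_size=20, generations=100):
    """Minimal generational EA loop (illustrative sketch only).

    fitness: maps an individual to a score (higher is better here).
    init, mutate, crossover: problem-specific operators supplied by the user.
    """
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament: selection is biased toward fitter individuals.
            a, b = random.sample(population, 2)
            return a if fitness(a) > fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            # Higher-order transformation (crossover), then unary (mutation).
            child = crossover(select(), select())
            offspring.append(mutate(child))
        population = offspring
    # The best solution found is expected to be a near-optimum solution.
    return max(population, key=fitness)
```

For example, maximizing f(x) = -(x - 3)^2 with Gaussian mutation and averaging crossover drives the population toward x = 3.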

The unary and higher-order transformations are referred to as evolutionary operators. The two most frequently used evolutionary operators are:

  • Mutation, which modifies an individual by small random changes to generate a new individual (Michalewicz, 1996). This change can be done by inverting the value of a binary digit in the case of binary representations, or by adding (or subtracting) a small random value to (or from) selected values in the case of floating-point representations. The main objective of mutation is to add diversity by introducing new genetic material into the population in order to avoid being trapped in a local optimum. Generally, mutation is applied with a low probability. However, some problems (e.g. problems using floating-point representations or problems with highly convoluted search spaces) require that mutation be applied with a higher probability (Salman, 1999). A preferred strategy is to start with a high mutation probability and decrease it over time, which initially biases the search towards exploration of the search space and shifts the focus to exploitation in later generations.

  • Recombination (or Crossover), where parts from two (or more) individuals are combined together to generate new individuals (Michalewicz, 1996). The main objective of crossover is to explore new areas of the search space (Salman, 1999).
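The two operators, and the decaying mutation-probability schedule mentioned above, can be illustrated for a binary representation. This is a generic sketch; the function names and the linear decay schedule are assumptions, not the paper's operators.

```python
import random

def bitflip_mutate(bits, p_m):
    """Unary transformation: flip each bit independently with probability p_m."""
    return [1 - b if random.random() < p_m else b for b in bits]

def one_point_crossover(a, b):
    """Higher-order transformation: combine parts of two parents at a random cut."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def decaying_pm(gen, max_gen, p_start=0.5, p_end=0.01):
    """Linearly decrease the mutation probability over the run:
    high early (exploration), low late (exploitation)."""
    return p_start + (p_end - p_start) * gen / max_gen
```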

There are four major paradigms in evolutionary computation: genetic programming (GP) (Koza, 1992, Koza and Poli, 2005), evolutionary programming (EP) (Fogel, 1994), evolution strategies (ES) (Bäck et al., 1991) and genetic algorithms (GA) (Goldberg, 1989, Sastry et al., 2005).

Due to their population-based nature, EAs can avoid being trapped in a local optimum, and consequently have the ability to find global optimal solutions. Thus, EAs can be viewed as global optimization algorithms. However, it should be noted that EAs may fail to converge to a global optimum.

EAs have been successfully applied to a wide range of optimization problems, for example, image processing, pattern recognition, scheduling, engineering design, amongst others (Goldberg, 1989).

Recently, Storn and Price (1995) proposed a new EA called differential evolution (DE). DE is similar to GAs in that a population of individuals is used to search for an optimal solution (Feoktistov and Janaqi, 2004). The main difference between GAs and DE is that, in GAs, mutation is the result of small perturbations to the genes of an individual, while in DE mutation is the result of arithmetic combinations of individuals (Feoktistov and Janaqi, 2004). At the beginning of the evolution process, the mutation operator of DE favors exploration. As evolution progresses, the mutation operator favors exploitation (Xue, 2003). Hence, DE automatically adapts the mutation increments (i.e. search step) to the best value based on the stage of the evolutionary process. Mutation in DE is therefore not based on a predefined probability density function.

DE is easy to implement, requires little parameter tuning (Paterlini and Krink, 2004) and exhibits fast convergence (Karaboga and Okdem, 2004). However, according to Krink et al. (2004), noise may adversely affect the performance of DE due to its greedy nature.

DE has been successfully applied to solve a wide range of optimization problems such as clustering (Paterlini and Krink, 2004), unsupervised image classification (Omran et al., 2005a), digital filter design (Storn, 1995), optimization of non-linear functions (Babu and Angira, 2001), global optimization of non-linear chemical engineering processes (Angira and Babu, 2003) and multi-objective optimization (Abbass, 2002a, Babu and Jehan, 2003). In short, DE is now generally considered as a reliable, accurate, robust and fast optimization technique. However, the user has to find the best values for the problem-dependent control parameters used in DE. Finding the best values for the control parameters is a time consuming task. This paper empirically analyzes a new version of DE proposed by Omran et al. (2005b) where the control parameters are self-adaptive. The new version is called self-adaptive differential evolution (SDE). The results of the experiments conducted are shown and compared with the versions of DE proposed by Price and Storn (2005), a self-adaptive version of DE proposed by Abbass (2002b) and two well-known self-adaptive evolutionary programming algorithms (Bäck and Schwefel, 1993, Yao et al., 1999). Furthermore, the performance of SDE using the ring neighborhood topology is investigated.

The remainder of the paper is organized as follows: Section 2 provides an overview of DE. SDE is summarized in Section 3. The benchmark functions to analyze the performance of SDE are given in Section 4. Results of the experiments are presented and discussed in Section 5. Finally, Section 6 concludes the paper.


Differential evolution

Unlike other evolutionary algorithms, differential evolution (DE) does not make use of some probability distribution function in order to introduce variations into the population. Instead, DE uses the differences between randomly selected vectors (individuals) as the source of random variations for a third vector (individual), referred to as the target vector. Trial solutions are generated by adding weighted difference vectors to the target vector. This process is referred to as the mutation operator.
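The classic DE/rand/1/bin scheme described above can be sketched as follows. This is a generic illustration of Storn and Price's scheme, not the paper's SDE variant; the fixed control parameters F (scale factor) and CR (crossover probability) are exactly the values SDE later self-adapts.

```python
import random

def de_minimize(f, dim, bounds, pop_size=30, F=0.5, CR=0.9,
                generations=200, seed=1):
    """Sketch of DE/rand/1/bin: a trial vector is formed by adding a weighted
    difference of two random vectors to a third, then binomially recombined
    with the target vector."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Three distinct random individuals, all different from the target.
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Mutation: weighted difference vector added to a base vector.
            v = [pop[r3][d] + F * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
            # Binomial crossover; jrand guarantees at least one mutated component.
            jrand = rng.randrange(dim)
            u = [v[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
            # Greedy selection: the trial replaces the target only if no worse.
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

The greedy selection step is what makes DE fast but, as Krink et al. (2004) note, also what makes it vulnerable to noisy fitness evaluations.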

Self-adaptive differential evolution (SDE)

Due to the success achieved in SPDE by self-adapting Pr, this paper proposes that the same mechanism be applied to self-adapt the value of F. It is also proposed that Pr be generated for each individual from a normal distribution. The resulting algorithm is referred to as the self-adaptive DE (SDE).

For SDE, the mutation operator changes as follows:

v_i(t) = x_{i3}(t) + F_i(t) · (x_{i1}(t) − x_{i2}(t)),

where

F_i(t) = F_{i4}(t) + N(0, 0.5) × (F_{i5}(t) − F_{i6}(t)),

with i4 ≠ i5 ≠ i6 and i4, i5, i6 ∼ U(1, …, s).
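The self-adaptation rule for F above can be sketched directly: each individual carries its own scale factor, and a new one is derived from the factors of three distinct randomly chosen individuals. The interpretation of N(0, 0.5) as a normal distribution with standard deviation 0.5 is an assumption here.

```python
import random

def self_adapt_F(F_pop, rng=random):
    """Compute a new scale factor for one individual using the SDE rule:
    F_i(t) = F_{i4}(t) + N(0, 0.5) * (F_{i5}(t) - F_{i6}(t)),
    where i4, i5, i6 are distinct indices drawn uniformly from the
    population of size s = len(F_pop).
    Assumes N(0, 0.5) denotes a normal with standard deviation 0.5."""
    i4, i5, i6 = rng.sample(range(len(F_pop)), 3)
    return F_pop[i4] + rng.gauss(0, 0.5) * (F_pop[i5] - F_pop[i6])
```

Note that when all individuals carry the same F, the difference term vanishes and that value is simply propagated; diversity in the F population is what drives the adaptation.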

For the crossover operator, Pr ∼ N

Benchmark functions

This section lists the benchmark functions used to compare the performance of SDE with that of other adaptive methods. These benchmark functions provide a balance of unimodal and multimodal functions, taken from evolutionary computation literature (Yao et al., 1999, Krink et al., 2004, Feoktistov and Janaqi, 2004).

For each of these functions, the goal is to find the global minimizer, formally defined as:

Given f : R^{N_d} → R, find x* ∈ R^{N_d} such that f(x*) ≤ f(x), ∀x ∈ R^{N_d}.

The following functions were used:

A. Sphere
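The Sphere function is the standard unimodal benchmark f(x) = Σ_i x_i², with global minimum f(0) = 0 at the origin. A minimal implementation:

```python
def sphere(x):
    """Sphere benchmark: f(x) = sum_i x_i**2.
    Unimodal and separable; global minimum f(0, ..., 0) = 0."""
    return sum(xi * xi for xi in x)
```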

Experimental results

This section compares SDE with other DE strategies proposed by Price and Storn (2005), SPDE (Abbass, 2002b) (all the steps dealing with multi-objective functions were removed), classical evolutionary programming (CEP) (Bäck and Schwefel, 1993) and fast evolutionary programming (FEP) (Yao et al., 1999). Note that the CEP and FEP algorithms also use self-adaptation to find proper values for control parameters. The effect of noise on the performance of SDE is investigated in comparison with the other approaches.

Conclusions

The performance of DE is sensitive to the choice of control parameters. Finding the best values for these parameters for each problem is a time consuming task. This paper investigated a self-adaptive version of DE, called SDE. The approach was tested on nine benchmark functions where it generally outperformed other well-known versions of DE (including other adaptive versions). The paper also investigated the effect of noise on the performance of SDE and found that noise degrades the performance of SDE.

References (38)

  • H. Abbass

    A memetic pareto evolutionary approach to artificial neural networks

  • Abbass, H., 2002b. The self-adaptive pareto differential evolution algorithm. In: Proceedings of the IEEE Congress on...
  • Angira, R., Babu, B., 2003. Evolutionary computation for global optimization of non-linear chemical engineering...
  • Babu, B., Jehan, M., 2003. Differential evolution for multi-objective optimization. In: proceedings of the IEEE...
  • Babu, B., Angira, R., 2001. Optimization of non-linear functions using evolutionary computation. In: Proceedings of the...
  • Bäck, T., Hoffmeister, F., Schwefel, H., 1991. A survey of evolution strategies. In: Proceedings of the Fourth...
  • T. Bäck et al.

    An overview of evolutionary algorithms for parameter optimization

    Evolutionary Computation

    (1993)
  • Bui, L., Shan, Y., Qi, F., Abbass, H., 2005. Comparing two versions of differential evolution in real parameter...
  • Feoktistov, V., Janaqi, S., 2004. Generalization of the strategies in differential evolution. In: Proceedings of...
  • L. Fogel

    Evolutionary programming in perspective: The top-down view

  • D. Goldberg

    Genetic Algorithms in Search Optimization and Machine Learning

    (1989)
  • D. Karaboga et al.

    A simple and global optimization algorithm for engineering problems: differential evolution algorithm

    Turkish Journal of Electrical Engineering

    (2004)
  • J. Koza

    Genetic Programming: On the Programming of Computers by means of Natural Selection

    (1992)
  • J. Koza et al.

    Genetic programming

  • Krink, T., Filipic, B., Fogel, G., 2004. Noisy optimization problems – a particular challenge for differential...
  • Lampinen, J., Zelinka, I., 2000. On stagnation of the differential evolution algorithm. In: Proceedings of the 6th...
  • Liu, J., Lampinen, J., 2002. A fuzzy adaptive differential evolution algorithm. In: Proceedings of the IEEE...
  • Z. Michalewicz

    Genetic Algorithms + Data Structures = Evolution Programs

    (1996)
  • Z. Michalewicz et al.

    How to Solve It: Modern Heuristics

    (2000)