
Information Sciences

Volume 181, Issue 24, 15 December 2011, Pages 5364-5386

Adaptive strategy selection in differential evolution for numerical optimization: An empirical study

https://doi.org/10.1016/j.ins.2011.07.049

Abstract

Differential evolution (DE) is a versatile and efficient evolutionary algorithm for global numerical optimization that has been widely used in different application fields. However, many strategies have been proposed for the generation of new solutions, and choosing which of them to apply is problem-dependent and critical to DE performance. In this paper, we present two DE variants with adaptive strategy selection: two different techniques, namely Probability Matching and Adaptive Pursuit, are employed in DE to autonomously select the most suitable strategy while solving the problem, according to the strategies' recent impact on the optimization process. To measure this impact, four credit assignment methods are assessed; they update the known performance of each strategy in different ways, based on the relative fitness improvement achieved by its recent applications. The performance of the analyzed approaches is evaluated on 22 benchmark functions. Experimental results confirm that they are able to adaptively choose the most suitable strategy for a specific problem in an efficient way. Compared with other state-of-the-art DE variants, better results are obtained on most of the functions in terms of the quality of the final solutions and convergence speed.

Introduction

Differential evolution (DE), proposed by Storn and Price [35], is an efficient and versatile population-based direct search algorithm that implements the evolutionary generation-and-test paradigm for global optimization, using distance and direction information from the current population to guide the search. Among its advantages are its simple structure, ease of use, speed, and robustness, which enable its application to many real-world problems, such as IIR filter design, neural network training [29], power systems [43], financial market dynamics modeling [16], data mining [4], and so on. A good survey of DE can be found in [5], where its basic concepts and major variants, as well as some theoretical studies and application examples in complex environments, are reviewed in detail.

In the seminal DE algorithm [35], a single mutation strategy was used for the generation of new solutions; later on, Price and Storn suggested nine other strategies [29], [36]. Further mutation strategies have also been proposed in the DE literature [50], [3], [6], [8]. Although they augment the robustness of the underlying algorithm, these many available strategies leave the user with the task of deciding which of them is most suitable for the problem at hand – a difficult decision that is crucial to the performance of DE [31], [30], [23].

Off-line tuning techniques, such as F-Race [1], could be used to choose the mutation strategy. However, besides being computationally expensive, such techniques usually output a static setting, while, in practice, the performance of each mutation strategy depends not only on the problem itself, but also on the characteristics of the region of the search landscape being explored by the population at each generation. To be efficient, therefore, the selection of the strategy should be done autonomously and continuously while solving the problem, i.e., dynamically adapting itself as the search goes on.

To contribute to remedying this drawback, in this paper we extend our recent work [15] on the use of adaptive strategy selection within DE for global numerical optimization. To perform adaptive strategy selection, i.e., to automatically select the best mutation strategy for the generation of each offspring while solving the problem, two elements need to be defined [48], [18]: (i) how to select among the available strategies based on their recent performance (strategy selection); and (ii) how to measure the performance of the strategies after their application, and consequently update the empirical quality estimates kept for each of them (credit assignment). In this work, two strategy selection techniques, namely Probability Matching [12] and Adaptive Pursuit [41], are independently analyzed in combination with each of four credit assignment techniques based on the relative fitness improvement. In addition, a parameter sensitivity analysis is conducted to investigate the impact of the hyper-parameters on the performance of the resulting adaptive strategy selection technique. Experiments have been conducted on 22 widely used benchmark problems, including nine test functions presented at CEC-05 [37]. The results indicate that the analyzed approach is able to select the most suitable strategy while solving the problem at hand. Compared with other state-of-the-art DE variants, better results are obtained on most of the functions in terms of the quality of the final solutions and convergence speed.
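To make the credit assignment idea concrete, the following is a minimal sketch (the function names and the exact reward form are our own simplification for illustration, not necessarily the formulas used in the paper): a reward based on the relative fitness improvement of a successful offspring is folded into a running quality estimate for the strategy that produced it.

```python
def relative_improvement(parent_f, child_f, best_f):
    """Reward for one strategy application (minimization, positive fitness
    assumed): the fitness gain over the parent, scaled by the best-so-far
    fitness relative to the offspring. Illustrative form only."""
    if child_f >= parent_f:          # unsuccessful application earns nothing
        return 0.0
    return (best_f / child_f) * (parent_f - child_f)

def update_quality(quality, s, reward, alpha=0.3):
    """Exponential recency-weighted update of strategy s's quality estimate,
    so recent rewards dominate older ones."""
    quality[s] += alpha * (reward - quality[s])
    return quality
```

The adaptation rate alpha controls how quickly old performance is forgotten; the strategy selection layer then turns these quality estimates into selection probabilities.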

Compared with our previous work in [15], the main contributions of this paper are twofold: (i) in order to pursue the most suitable strategy at different search stages of a specific problem more rapidly, the Adaptive Pursuit technique is used and its performance is compared with the Probability Matching-based DE variant; and (ii) comprehensive experiments are conducted to verify our approach, and its performance is analyzed in detail.

The remainder of the paper is organized as follows. Section 2 briefly introduces the background and related work of this paper. In Section 3, we describe the adaptive strategy selection approaches in detail, followed by the experimental results and discussions in Section 4. Finally, Section 5 is devoted to conclusions and future work.


Problem formulation

Without loss of generality, in this work we consider the following numerical optimization problem:

$$\text{minimize } f(\mathbf{x}), \quad \mathbf{x} \in S,$$

where $S \subseteq \mathbb{R}^D$ is a compact set, $\mathbf{x} = [x_1, x_2, \ldots, x_D]^T$, and $D$ is the dimension, i.e., the number of decision variables. Generally, each variable $x_j$ satisfies a boundary constraint:

$$L_j \leq x_j \leq U_j, \quad j = 1, 2, \ldots, D.$$
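As a concrete instance of this formulation (the sphere function and the bounds are purely illustrative choices, not the paper's benchmarks):

```python
# Minimize the sphere function over the box [-5, 5]^D.
D = 3
L = [-5.0] * D          # lower bounds L_j
U = [5.0] * D           # upper bounds U_j

def f(x):
    """Objective f(x) = sum_j x_j^2, minimized at the origin."""
    return sum(xj * xj for xj in x)

def feasible(x):
    """Check the boundary constraints L_j <= x_j <= U_j for all j."""
    return all(L[j] <= x[j] <= U[j] for j in range(D))
```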

Differential evolution

DE [35] is a simple evolutionary algorithm (EA) for global numerical optimization. It creates new candidate solutions by combining the parent individual and several other individuals of the same population.
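The classic DE/rand/1/bin strategy can be sketched as follows (a minimal illustration; the values of F and CR are common defaults, not the settings used in the paper):

```python
import random

def de_rand_1_bin(pop, i, F=0.5, CR=0.9):
    """DE/rand/1/bin: build a trial vector for target index i.
    pop is a list of candidate solutions (lists of floats)."""
    D = len(pop[i])
    # pick three mutually distinct indices, all different from i
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    # mutation: base vector plus scaled difference vector
    mutant = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
    # binomial crossover; jrand guarantees at least one mutant component
    jrand = random.randrange(D)
    trial = [mutant[j] if (random.random() < CR or j == jrand) else pop[i][j]
             for j in range(D)]
    return trial
```

In a full DE loop, the trial vector would replace the parent only if its fitness is at least as good (greedy selection).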

Adaptive strategy selection in DE

In order to automatically select the most suitable strategy while solving a problem without any prior knowledge, in this work, we analyze the use of strategy adaptation methods in DE for numerical optimization problems. This is an extension of our recent work in [15], which has been considerably enhanced, with the major differences being listed as follows:

  • In this work, two strategy selection techniques, namely Probability Matching (PM) [12] and Adaptive Pursuit (AP) [41], are analyzed and compared.
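The two probability-update rules can be sketched as follows (a hedged illustration under our own choice of hyper-parameter values p_min and beta; the helper names are ours):

```python
def pm_update(quality, p_min=0.05):
    """Probability Matching: selection probabilities proportional to the
    quality estimates, with a floor p_min so no strategy is ever lost."""
    K = len(quality)
    total = sum(quality) or 1.0   # uniform-like fallback when all are zero
    return [p_min + (1 - K * p_min) * q / total for q in quality]

def ap_update(probs, best, p_min=0.05, beta=0.8):
    """Adaptive Pursuit: push the currently best strategy toward p_max and
    all others toward p_min (winner-take-all pursuit, learning rate beta)."""
    K = len(probs)
    p_max = 1 - (K - 1) * p_min
    return [p + beta * ((p_max if s == best else p_min) - p)
            for s, p in enumerate(probs)]
```

In either case, the strategy to apply next is then drawn by roulette-wheel selection over the resulting probability vector; AP's pursuit step typically concentrates probability on the current winner faster than PM does.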

Experimental results

In order to evaluate the performance of our approach, 22 benchmark functions were selected as the test suite. Functions f01–f13 are chosen from Yao et al. [48]. Functions f01–f04 are unimodal. The Rosenbrock function f05 is a multi-modal function when D > 3 [33]. Function f06 is the step function, which has one minimum and is discontinuous. Function f07 is a noisy quartic function. Functions f08–f13 are multi-modal functions where the number of local minima increases exponentially with the dimension.

Conclusions and future work

Many mutation strategies have been proposed for generating new solutions within DE in different ways. Although allowing a very wide use of DE in many different fields of application, this number of available strategies creates an extra difficulty for the user: it is not trivial to define which strategy should be used on a given problem in order to achieve good performance. Besides, the strategies are not simply problem-dependent; indeed, their performance tends to vary as the search goes on.

Acknowledgments

The authors would like to sincerely thank the Editor and the anonymous reviewers for their constructive comments, which improved the original paper significantly.

References (50)

  • S. Das et al., Differential evolution using a neighborhood-based mutation operator, IEEE Trans. Evol. Comput. (2009)
  • S. Das et al., Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput. (2011)
  • B. Dorronsoro et al., Improving classical and decentralized differential evolution with new mutation operator and population topologies, IEEE Trans. Evol. Comput. (2011)
  • A.E. Eiben et al., Parameter control in evolutionary algorithms
  • M.G. Epitropakis et al., Enhancing differential evolution utilizing proximity-based mutation operators, IEEE Trans. Evol. Comput. (2011)
  • K. Fang et al., Orthogonal and Uniform Design (2001)
  • Á. Fialho, M. Schoenauer, M. Sebag, Analysis of adaptive operator selection techniques on the royal road and long...
  • S. García et al., A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the CEC’2005 special session on real parameter optimization, J. Heuristics (2009)
  • D.E. Goldberg, Probability matching, the magnitude of reinforcement, and classifier system bidding, Mach. Learn. (1990)
  • W. Gong et al., Enhanced differential evolution with adaptive strategies for numerical optimization, IEEE Trans. Systems Man Cybernet. Part B – Cybernet. (2011)
  • W. Gong, A. Fialho, Z. Cai, Adaptive strategy selection in differential evolution, in: J. Branke (Ed.), Genetic and...
  • F. Herrera, M. Lozano, D. Molina, Special issue on the scalability of evolutionary algorithms and other metaheuristics...
  • C.-Y. Lee et al., Evolutionary programming using mutations based on the Lévy probability distribution, IEEE Trans. Evol. Comput. (2004)
  • Y.-W. Leung et al., An orthogonal genetic algorithm with quantization for global numerical optimization, IEEE Trans. Evol. Comput. (2001)
  • R. Mallipeddi, P.N. Suganthan, Differential evolution algorithm with ensemble of parameters and mutation and crossover...

This work was partly supported by the Fundamental Research Funds for the Central Universities at China University of Geosciences (Wuhan) under Grant No. CUG100316, the Foundation of State Key Lab of Software Engineering under Grant No. SKLSE2010-08-13, the National Natural Science Foundation of China under Grant No. 61075063, and the Research Fund for the Doctoral Program of Higher Education under Grant No. 20090145110007.
