Adaptive strategy selection in differential evolution for numerical optimization: An empirical study☆
Introduction
Differential evolution (DE), proposed by Storn and Price [35], is an efficient and versatile population-based direct search algorithm that implements the evolutionary generation-and-test paradigm for global optimization, using distance and direction information from the current population to guide the search. Its advantages include a simple structure, ease of use, speed, and robustness, which enable its use in many real-world applications, such as IIR filter design and neural network training [29], power systems [43], financial market dynamics modeling [16], data mining [4], and so on. A good survey of DE can be found in [5], where its basic concepts and major variants, as well as theoretical studies and application examples in complex environments, are reviewed in detail.
In the seminal DE algorithm [35], a single mutation strategy was used to generate new solutions; later, Price and Storn suggested nine other strategies [29], [36]. Further mutation strategies have also been proposed in the DE literature [50], [3], [6], [8]. Although they augment the robustness of the underlying algorithm, these many available strategies leave the user with the burden of deciding which one is most suitable for the problem at hand – a difficult choice that is crucial to the performance of DE [31], [30], [23].
Off-line tuning techniques, such as F-Race [1], could be used to choose the mutation strategy. However, besides being computationally expensive, such techniques usually output a static setting; in practice, the performance of each mutation strategy depends not so much on the problem itself as on the characteristics of the region of the search landscape being explored by the population at each generation. To be efficient, therefore, the autonomous selection of the strategy should be done continuously while solving the problem, i.e., dynamically adapting as the search goes on.
To help remedy this drawback, in this paper we extend our recent work [15] on the use of adaptive strategy selection within DE for global numerical optimization. For adaptive strategy selection, i.e., to automatically select the best mutation strategy for the generation of each offspring while solving the problem, two elements need to be defined [48], [18]: (i) how to select among the available strategies based on their recent performance (strategy selection); and (ii) how to measure the performance of the strategies after their application, and consequently update the empirical quality estimates kept for each of them (credit assignment). In this work, two strategy selection techniques, namely Probability Matching [12] and Adaptive Pursuit [41], are independently analyzed in combination with each of four credit assignment techniques based on the relative fitness improvement. In addition, a parameter sensitivity analysis is conducted to investigate the impact of the hyper-parameters on the performance of the resulting adaptive strategy selection technique. Experiments were conducted on 22 widely used benchmark problems, including nine test functions from CEC-05 [37]. The results indicate that the analyzed approach is able to select the most suitable strategy while solving the problem at hand. Compared with other state-of-the-art DE variants, better results are obtained on most of the functions in terms of the quality of final solutions and convergence speed.
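The two elements above can be sketched in code. The following is a minimal illustration of Probability Matching paired with a relative-fitness-improvement credit; the class and function names, as well as the specific credit formula, are our own illustrative choices, not necessarily the exact definitions evaluated in the paper:

```python
import random

class ProbabilityMatching:
    """Probability Matching over K strategies (a minimal sketch).

    Keeps one quality estimate per strategy and maps the estimates to
    selection probabilities, with a floor p_min so that no strategy is
    ever completely abandoned (it may become useful again later on).
    """
    def __init__(self, k, p_min=0.05, alpha=0.3):
        self.k, self.p_min, self.alpha = k, p_min, alpha
        self.quality = [1.0] * k
        self.prob = [1.0 / k] * k

    def select(self):
        # roulette-wheel selection according to the current probabilities
        return random.choices(range(self.k), weights=self.prob)[0]

    def update(self, s, reward):
        # credit assignment: exponential recency-weighted quality estimate
        self.quality[s] += self.alpha * (reward - self.quality[s])
        total = sum(self.quality)
        self.prob = [self.p_min + (1 - self.k * self.p_min) * q / total
                     for q in self.quality]

def relative_fitness_improvement(parent_f, child_f):
    # one common credit measure (minimization): improvement scaled by the
    # parent's fitness; zero when the offspring is no better
    return max(0.0, (parent_f - child_f) / abs(parent_f)) if parent_f else 0.0
```

In a DE loop, `select()` would pick the mutation strategy for each offspring, and `update()` would be called with the credit computed from the parent/offspring fitness pair.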
Compared with our previous work in [15], the main contributions of this paper are twofold: (i) to pursue the most suitable strategy at different search stages of a specific problem more rapidly, the Adaptive Pursuit technique is used and its performance is compared with that of the Probability Matching-based DE variant; and (ii) comprehensive experiments are conducted to verify our approach and its performance is analyzed in detail.
The remainder of the paper is organized as follows. Section 2 briefly introduces the background and related work of this paper. In Section 3, we describe the adaptive strategy selection approaches in detail, followed by the experimental results and discussions in Section 4. Finally, Section 5 is devoted to conclusions and future work.
Section snippets
Problem formulation
Without loss of generality, in this work, we consider the following numerical optimization problem: minimize f(x), x ∈ S, where S ⊆ R^D is a compact set, x = [x1, x2, …, xD]^T, and D is the dimension, i.e., the number of decision variables. Generally, each variable xj satisfies a boundary constraint: xj,min ≤ xj ≤ xj,max, j = 1, 2, …, D.
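In code, such a bound-constrained problem amounts to an objective function plus a box constraint. The sketch below uses the sphere function as a toy instance; all names and the bound values are illustrative, not taken from the paper:

```python
import numpy as np

# Bound-constrained minimization: minimize f(x) over x in [x_min, x_max]^D.
D = 5
x_min = np.full(D, -100.0)
x_max = np.full(D, 100.0)

def f(x):
    # toy objective: the sphere function, minimized at the origin
    return float(np.sum(x ** 2))

def clip_to_bounds(x):
    # simple repair: project any out-of-bounds component back onto the box
    return np.clip(x, x_min, x_max)
```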
Differential evolution
DE [35] is a simple evolutionary algorithm (EA) for global numerical optimization. It creates new candidate solutions by combining the parent individual and several other individuals of the same population.
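As an illustration, the classic DE/rand/1/bin scheme — differential mutation, binomial crossover, and greedy selection — can be sketched as follows. This is a minimal sketch, not the implementation evaluated in the paper:

```python
import numpy as np

def de_rand_1_bin(pop, fitness, f=0.5, cr=0.9, rng=None):
    """One generation of classic DE ("DE/rand/1/bin") on a minimization problem.

    pop     : (NP, D) array of candidate solutions
    fitness : callable mapping a D-vector to a scalar to be minimized
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    scores = np.array([fitness(x) for x in pop])
    for i in range(n):
        # pick three distinct individuals, all different from the target i
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + f * (pop[r2] - pop[r3])   # differential mutation
        # binomial crossover: at least one component comes from the mutant
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True
        trial = np.where(cross, mutant, pop[i])
        # greedy selection: keep the trial only if it is no worse
        trial_f = fitness(trial)
        if trial_f <= scores[i]:
            pop[i], scores[i] = trial, trial_f
    return pop, scores
```

The greedy selection makes the best fitness in the population monotonically non-increasing across generations.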
Adaptive strategy selection in DE
In order to automatically select the most suitable strategy while solving a problem without any prior knowledge, in this work we analyze the use of strategy adaptation methods in DE for numerical optimization problems. This is an extension of our recent work in [15], which has been considerably enhanced; the major differences are listed as follows:
- In this work, two strategy selection techniques, namely Probability Matching (PM) [12] and Adaptive Pursuit (AP) [41], are analyzed and compared in combination with the credit assignment techniques.
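For contrast with Probability Matching, the Adaptive Pursuit update can be sketched as follows; parameter names and default values are illustrative, not the settings used in the experiments:

```python
import random

class AdaptivePursuit:
    """Adaptive Pursuit (AP) strategy selection, a minimal sketch.

    Unlike Probability Matching, AP "pursues" the currently best strategy:
    its probability is pushed toward p_max = 1 - (K - 1) * p_min while all
    others are pushed toward p_min, at learning rate beta. The winner's
    probability therefore converges faster, at some risk of over-committing.
    """
    def __init__(self, k, p_min=0.05, alpha=0.3, beta=0.3):
        self.k, self.p_min, self.alpha, self.beta = k, p_min, alpha, beta
        self.p_max = 1.0 - (k - 1) * p_min
        self.quality = [1.0] * k
        self.prob = [1.0 / k] * k

    def select(self):
        return random.choices(range(self.k), weights=self.prob)[0]

    def update(self, s, reward):
        # same recency-weighted quality estimate as in Probability Matching
        self.quality[s] += self.alpha * (reward - self.quality[s])
        best = max(range(self.k), key=self.quality.__getitem__)
        for i in range(self.k):
            target = self.p_max if i == best else self.p_min
            self.prob[i] += self.beta * (target - self.prob[i])
```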
Experimental results
In order to evaluate the performance of our approach, 22 benchmark functions were selected as the test suite. Functions f01–f13 are chosen from Yao et al. [48]. Functions f01–f04 are unimodal. Rosenbrock’s function f05 is multi-modal when D > 3 [33]. Function f06 is the step function, which has one minimum and is discontinuous. Function f07 is a noisy quartic function. Functions f08–f13 are multi-modal functions whose number of local minima increases exponentially with the problem dimension.
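For reference, a few of the benchmark families mentioned above can be written compactly as below. These follow the standard textbook definitions (as in Yao et al.) and should be treated as illustrative sketches rather than the exact test-suite code:

```python
import numpy as np

def sphere(x):
    # unimodal (f01-style), global minimum 0 at the origin
    return float(np.sum(x ** 2))

def rosenbrock(x):
    # f05: multi-modal for D > 3, global minimum 0 at x = (1, ..., 1)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))

def step(x):
    # f06: discontinuous step function with a single minimum
    return float(np.sum(np.floor(x + 0.5) ** 2))

def noisy_quartic(x, rng=np.random.default_rng()):
    # f07: quartic function perturbed by uniform noise in [0, 1)
    d = len(x)
    return float(np.sum(np.arange(1, d + 1) * x ** 4) + rng.random())
```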
Conclusions and future work
Many mutation strategies have been proposed for generating new solutions within DE in different ways. Although allowing a very wide use of DE in many different fields of application, this number of available strategies creates an extra difficulty for the user: it is not trivial to decide which strategy should be used on a given problem in order to achieve good performance. Besides, the strategies are not simply problem-dependent; indeed, their performance tends to vary as the search goes on, depending on the region of the search space currently being explored.
Acknowledgments
The authors would like to sincerely thank the Editor and the anonymous reviewers for their constructive comments, which improved the original paper significantly.
References (50)
- et al., Kernel-induced fuzzy clustering of image pixels with an improved differential evolution algorithm, Inform. Sci. (2010)
- et al., Enhancing the performance of differential evolution using orthogonal design method, Appl. Math. Comput. (2008)
- et al., A fuzzy logic control using a differential evolution algorithm aimed at modelling the financial market dynamics, Inform. Sci. (2011)
- et al., Differential evolution algorithm with ensemble of parameters and mutation strategies, Appl. Soft Comput. (2011)
- et al., Differential evolution in constrained numerical optimization: An empirical study, Inform. Sci. (2010)
- et al., A differential evolution algorithm with self-adapting strategy and control parameters, Comput. Oper. Res. (2011)
- et al., Estimation of distribution and differential evolution cooperation for large scale economic load dispatch optimization of power systems, Inform. Sci. (2010)
- et al., Large scale evolutionary optimization using cooperative coevolution, Inform. Sci. (2008)
- M. Birattari, T. Stützle, L. Paquete, K. Varrentrapp, A racing algorithm for configuring metaheuristics, in: W.B....
- et al., Performance comparison of self-adaptive and adaptive differential evolution algorithms, Soft Comput. (2007)
- Differential evolution using a neighborhood-based mutation operator, IEEE Trans. Evol. Comput.
- Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput.
- Improving classical and decentralized differential evolution with new mutation operator and population topologies, IEEE Trans. Evol. Comput.
- Parameter control in evolutionary algorithms
- Enhancing differential evolution utilizing proximity-based mutation operators, IEEE Trans. Evol. Comput.
- Orthogonal and Uniform Design
- A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the CEC’2005 special session on real parameter optimization, J. Heuristics
- Probability matching, the magnitude of reinforcement, and classifier system bidding, Mach. Learn.
- Enhanced differential evolution with adaptive strategies for numerical optimization, IEEE Trans. Systems Man Cybernet. Part B – Cybernet.
- Evolutionary programming using mutations based on the Lévy probability distribution, IEEE Trans. Evol. Comput.
- An orthogonal genetic algorithm with quantization for global numerical optimization, IEEE Trans. Evol. Comput.
☆ This work was partly supported by the Fundamental Research Funds for the Central Universities at China University of Geosciences (Wuhan) under Grant No. CUG100316, the Foundation of State Key Lab of Software Engineering under Grant No. SKLSE2010-08-13, the National Natural Science Foundation of China under Grant No. 61075063, and the Research Fund for the Doctoral Program of Higher Education under Grant No. 20090145110007.