Social learning differential evolution
Introduction
Evolutionary algorithms (EAs) are stochastic optimization techniques that mimic the evolutionary process of nature. The common conceptual basis of EAs is to evolve a population of candidate solutions with the help of information-exchange procedures. In the last few decades, numerous EAs have been proposed based on different inspirations taken from natural evolution, including the genetic algorithm (GA), evolution strategies (ES), evolutionary programming (EP), particle swarm optimization (PSO), and ant colony optimization (ACO). The major differences among these EAs lie in the way new trial solutions are generated. Meanwhile, how to utilize population information to further enhance the search ability of the reproduction operators remains one of the most salient and active topics in EAs.
Differential evolution (DE), proposed by Storn and Price [39], is a simple yet efficient EA for global numerical optimization. Due to its attractive characteristics, such as ease of use, compact structure, robustness, and speed, DE has been extended to handle large-scale, multi-objective, constrained, dynamic, and uncertain optimization problems [11]. Furthermore, DE has been successfully applied to many scientific and engineering fields [11], such as pattern recognition, signal processing, satellite communications, wireless sensor networks, and so on.
In DE, three main operators, i.e., mutation, crossover and selection, are used to evolve the population. Among them, mutation is the core operator that distinguishes DE from other EAs. However, we have observed that in most DE algorithms the parents for mutation are selected randomly from the current population; thus, all vectors are equally likely to be selected as parents, without any selective pressure. Although this mutation strategy is easy to use and may be good at exploring the search space, it is slow to exploit promising solutions. In addition, the need for more careful parent selection in DE has been advocated in [4], [13], [20], [40], [49]. In these works, the selection of parents for mutation has been shown to be very important to the performance of DE when solving complex problems.
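To make the "no selective pressure" point concrete, the following is a minimal sketch (not the authors' code) of the classical DE/rand/1 mutation: the three parents are drawn uniformly at random from the population, so every vector is equally likely to serve as a parent.

```python
import numpy as np

def rand1_mutation(pop, i, F=0.5, rng=None):
    """Classical DE/rand/1 mutation (illustrative sketch).

    The parents r1, r2, r3 are sampled uniformly at random from the
    population (excluding the target index i), so no vector is favored
    over any other -- there is no selective pressure at this step.
    """
    if rng is None:
        rng = np.random.default_rng()
    NP = len(pop)
    candidates = [j for j in range(NP) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    # Donor vector: V_i = X_r1 + F * (X_r2 - X_r3)
    return pop[r1] + F * (pop[r2] - pop[r3])
```

Parent-selection schemes such as the one proposed in this paper replace the uniform sampling of `candidates` with a biased or neighborhood-restricted choice.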
Social learning, which is widely observed in animal societies, refers to learning that is affected by interaction with, or observation of, another animal or its products [22]. As opposed to individual learning, where only a single individual's own experience is considered, the goal of social learning is to learn and imitate the behaviors of better individuals within a social group [22]. In social learning, the majority of studies focus on how the individuals within the group learn and, hence, how the entire group learns. Many mechanisms of social learning have been proposed in the literature, and they can be roughly classified into the following categories: local enhancement, stimulus enhancement, observational conditioning, matched-dependent behavior, and imitation [22]. In several EAs, these social learning mechanisms have been successfully introduced to improve performance. In [30], an incremental social learning framework is proposed for PSO variants with a growing population of learning agents. In [36], a social learning PSO is proposed by introducing a learning strategy whereby each particle can learn from any better particle in the current swarm. In [27], a social learning optimization algorithm is presented that consists of three co-evolution spaces: the micro space, the learning space, and the belief space.
Inspired by the imitation phenomenon of social learning, where people usually learn and imitate the behavior of a better person (or elite) within a social group, this study proposes an adaptive social learning (ASL) strategy for DE, yielding a new DE framework named social learning DE (SL-DE). Unlike the classical DE algorithms, where the parents in mutation are randomly selected from the current population, SL-DE uses the ASL strategy to extract neighborhood relationship information from the population to guide the selection of parents in mutation. ASL consists of four operators. In the social ranking operator, individuals in the current population are sorted according to their fitness values. In the evaluating social influence operator, the social influence of each individual is evaluated based on its ranking value. In the building social network operator, a new social network is built by establishing relationships between pairs of individuals according to their social influences. In the constructing neighborhood operator, the neighborhood of each individual is constructed from the built social network. With ASL, each individual is only allowed to interact with its neighbors, and the parents in mutation are selected from within its neighborhood. In this way, the neighborhood relationship information can be utilized effectively to guide the search of DE.
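The four ASL operators can be sketched as follows. This is a hypothetical illustration only: the ranking rule, the influence function, and the wiring probability below are our own illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def build_neighborhoods(fitness, rng=None):
    """Illustrative sketch of the four ASL operators (minimization).

    The influence function and edge probability are assumptions made
    for illustration; the paper defines its own formulas.
    """
    if rng is None:
        rng = np.random.default_rng()
    fitness = np.asarray(fitness, dtype=float)
    NP = len(fitness)
    # 1) Social ranking: sort individuals by fitness; best gets rank 0.
    ranks = np.empty(NP, dtype=int)
    ranks[np.argsort(fitness)] = np.arange(NP)
    # 2) Social influence: better-ranked individuals are more influential
    #    (assumed linear mapping of rank to influence in (0, 1]).
    influence = (NP - ranks) / NP
    # 3) Build social network: connect pair (i, j) with a probability
    #    based on their combined influence (assumed form).
    adj = np.zeros((NP, NP), dtype=bool)
    for i in range(NP):
        for j in range(i + 1, NP):
            if rng.random() < 0.5 * (influence[i] + influence[j]):
                adj[i, j] = adj[j, i] = True
    # 4) Construct neighborhoods: the neighbors of i are the nodes
    #    connected to it in the network.
    return [np.flatnonzero(adj[i]) for i in range(NP)]
```

In SL-DE, the parents for mutating individual `i` would then be sampled from `neighborhoods[i]` rather than from the whole population.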
To evaluate the effectiveness of the proposed approach, we apply SL-DE to several classical DE algorithms as well as advanced DE variants. Extensive experiments have been carried out on a set of benchmark functions from the 2013 IEEE Congress on Evolutionary Computation (CEC), covering real-parameter optimization [25] and large-scale global optimization [24], and on the CEC 2011 real-world application problems [10]. Simulation results show the advantages of SL-DE when compared with other algorithms on these test functions.
In summary, the major characteristics of SL-DE include the following:
- ASL is proposed to extract neighborhood relationship information of individuals during the evolutionary process, offering some insights into utilizing population information via the social learning mechanism.
- In SL-DE, each individual is only allowed to interact with its neighbors, and the parents in mutation are selected from the neighborhood, which provides an alternative for selecting parents in the mutation operator of DE.
- Because the simple structure of the classical DE algorithm has been maintained, SL-DE is still very simple and can easily be applied to most advanced DE variants to further improve their performance.
The rest of this paper is organized as follows. Section 2 briefly reviews some related work. The proposed SL-DE is presented in detail in Section 3. Section 4 reports the extensive experimental results. Finally, the conclusions are drawn in Section 5.
DE
In this study, DE is used for solving numerical optimization problems [39]. Without loss of generality, we consider a minimization problem f(X), X ∈ R^D, where D is the dimension of the decision variables. DE evolves a population of NP vectors representing the candidate solutions. Each vector is denoted as X_{i,G} = (x_{i,G}^1, x_{i,G}^2, …, x_{i,G}^D), i = 1, 2, …, NP, where NP is the population size and G is the current generation. In the classical DE algorithms, the algorithmic schemes can be
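A minimal sketch of one generation of the classical DE/rand/1/bin scheme (mutation, binomial crossover, then greedy selection) is given below. This is a standard textbook formulation, not the authors' implementation.

```python
import numpy as np

def de_generation(pop, f, F=0.5, CR=0.9, rng=None):
    """One generation of classical DE/rand/1/bin for minimizing f."""
    if rng is None:
        rng = np.random.default_rng()
    NP, D = pop.shape
    fit = np.array([f(x) for x in pop])
    new_pop = pop.copy()
    for i in range(NP):
        # Mutation: donor V_i = X_r1 + F * (X_r2 - X_r3), r1 != r2 != r3 != i.
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: take the donor component with probability CR,
        # with at least one dimension (jrand) guaranteed from the donor.
        jrand = rng.integers(D)
        cross = rng.random(D) < CR
        cross[jrand] = True
        u = np.where(cross, v, pop[i])
        # Greedy selection: the trial replaces the target if not worse.
        if f(u) <= fit[i]:
            new_pop[i] = u
    return new_pop
```

Because selection is greedy per individual, the best fitness in the population never increases from one generation to the next.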
Motivations
In most DE algorithms, vectors for mutation are selected as parents with equal probability, without any selective pressure. Due to this high degree of randomness, such a mutation strategy causes DE to be slow to exploit solutions and inefficient when searching complex problem spaces. As reviewed in Section 2.2, many approaches have been developed to deal with this problem by utilizing population information, and these attempts work well for improving the performance of DE. However,
Experimental result
In this section, extensive experiments are carried out to evaluate the performance of SL-DE. A test suite of benchmark functions is used, comprising the CEC 2013 special sessions on real-parameter optimization [25] and large-scale global optimization [24], and the CEC 2011 real-world application problems [10]. Detailed definitions can be found in [24], [25] and [10], respectively.
The experiments can be divided into eight parts:
(1) Sections 4.2 and 4.3 investigate what benefits can be obtained
Conclusion and future research
Inspired by the imitation phenomenon of social learning in animal societies, an adaptive social learning (ASL) strategy is proposed, and a new DE framework named social learning DE (SL-DE) is developed by introducing ASL into DE. Unlike the classical DE algorithms, SL-DE extracts neighborhood relationship information of individuals in the current population to guide the selection of parents.
Extensive experiments have been carried out to evaluate the effectiveness of SL-DE by comparing it with
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (61305085, 61572206, 61502184, 61572204), the Natural Science Foundation of Fujian Province of China (2014J05074, 2015J0101), the Promotion Program for Young and Middle-aged Teacher in Science and Technology Research of Huaqiao University (ZQN-PY410).
References (50)
- et al., Recent advances in differential evolution – an updated survey, Swarm Evol. Comput. (2016)
- et al., A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. (2011)
- CMA-ES with restarts for solving CEC 2013 benchmark problems, 2013 IEEE Congress on Evolutionary Computation (CEC) (2013)
- et al., Differential evolution with concurrent fitness based local search, 2013 IEEE Congress on Evolutionary Computation (CEC) (2013)
- Primate culture and social learning, Cognit. Sci. (2000)
- et al., Structured population size reduction differential evolution with multiple mutation strategies on CEC 2013 real parameter optimization, 2013 IEEE Congress on Evolutionary Computation (CEC) (2013)
- et al., Teaching and learning best differential evolution with self adaptation for real parameter optimization, 2013 IEEE Congress on Evolutionary Computation (CEC) (2013)
- et al., Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems, IEEE Trans. Evolut. Comput. (2006)
- et al., Improving differential evolution with a new selection method of parents for mutation, Front. Comput. Sci. (2016)
- et al., Differential evolution with neighborhood and direction information for numerical optimization, IEEE Trans. Cybern. (2013)