Information Sciences

Volumes 433–434, April 2018, Pages 464–509

Social learning differential evolution

https://doi.org/10.1016/j.ins.2016.10.003

Abstract

Differential evolution (DE) has attracted much attention in the field of evolutionary computation and has proved to be one of the most successful evolutionary algorithms (EAs) for global optimization. Mutation, as the core operator of DE, is essential for guiding the search of DE. In this study, inspired by the phenomenon of social learning in animal societies, we propose an adaptive social learning (ASL) strategy for DE to extract the neighborhood relationship information of individuals in the current population. The new DE framework is named social learning DE (SL-DE). Unlike the classical DE algorithms where the parents in mutation are randomly selected from the current population, SL-DE uses the ASL strategy to intelligently guide the selection of parents. With ASL, each individual is only allowed to interact with its neighbors and the parents in mutation will be selected from its neighboring solutions. To evaluate the effectiveness of the proposed framework, SL-DE is applied to several classical and advanced DE algorithms. The simulation results on forty-three real-parameter functions and seventeen real-world application problems have demonstrated the advantages of SL-DE over several representative DE variants and the state-of-the-art EAs.

Introduction

Evolutionary algorithms (EAs) are stochastic optimization techniques that mimic the evolutionary process of nature. The common conceptual basis of EAs is to evolve a population of candidate solutions with the help of information-exchange procedures. In the last few decades, numerous EAs have been proposed based on different inspirations taken from natural evolution. These include the genetic algorithm (GA), evolution strategy (ES), evolutionary programming (EP), particle swarm optimization (PSO), and ant colony optimization (ACO). The major differences among these EAs lie in the way new trial solutions are generated. Meanwhile, how to utilize the population information to further enhance the search ability of the reproduction operators remains one of the most salient and active topics in EAs.

Differential evolution (DE), proposed by Storn and Price [39], is a simple yet efficient EA for global numerical optimization. Due to its attractive characteristics, such as ease of use, compact structure, robustness and speediness, DE has been extended to handle large-scale, multi-objective, constrained, dynamic, and uncertain optimization problems [11]. Furthermore, DE has been successfully applied to many scientific and engineering fields [11], such as pattern recognition, signal processing, satellite communications, wireless sensor networks, and so on.

In DE, three main operators, i.e., mutation, crossover and selection, are used to evolve the population. Among them, mutation is the core operator that distinguishes DE from other EAs. However, we have observed that in most DE algorithms the parents for mutation are selected randomly from the current population; thus, all vectors are equally likely to be chosen as parents, without any selective pressure at all. Although this mutation strategy is easy to use and may be good at exploring the search space, it is slow to exploit solutions. In addition, the need for careful parent selection in DE has been advocated in [4], [13], [20], [40], [49]. In these works, the selection of parents for mutation has been shown to be very important to the performance of DE when solving complex problems.
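To make the randomness concrete, the widely used DE/rand/1 mutation can be sketched as follows; this is a minimal illustration of the standard operator (not the authors' code), showing that every vector other than the target is an equally likely parent:

```python
import numpy as np

def rand_1_mutation(pop, i, F=0.5, rng=None):
    """DE/rand/1 mutation for target index i on population `pop`
    (shape (NP, D)). Parents r1, r2, r3 are drawn uniformly at
    random, mutually distinct and different from i, so parent
    selection carries no selective pressure at all."""
    rng = rng or np.random.default_rng()
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    # Mutant vector: base parent plus a scaled difference vector.
    return pop[r1] + F * (pop[r2] - pop[r3])
```

Because the three indices are uniform over the whole population, the difference vector carries no fitness information; this is exactly the property that SL-DE's neighborhood-based parent selection replaces.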

Social learning, which is widely observed in animal societies, refers to learning that is affected by interaction with, or observation of, another animal or its products [22]. As opposed to individual learning, where only a single individual's experience is considered, the goal of social learning is to learn and imitate the behaviors of better individuals within a social group [22]. In social learning, the majority of studies focus on how the individuals within the group learn and, hence, how the entire group learns. Many mechanisms of social learning have been proposed in the literature, and they can be roughly classified into the following categories: local enhancement, stimulus enhancement, observational conditioning, matched dependent behavior and imitation [22]. In several EAs, these social learning mechanisms have been successfully introduced to improve performance. In [30], an incremental social learning framework is proposed for PSO variants with a growing population of learning agents. In [36], a social learning PSO is proposed by introducing a learning strategy in which each particle can learn from any better particle in the current swarm. In [27], a social learning optimization algorithm is presented that consists of three co-evolving spaces: the micro space, the learning space and the belief space.

Inspired by the imitation phenomenon of social learning where people usually learn and imitate the behavior of a better person (or elite) within a social group, this study proposes an adaptive social learning (ASL) strategy for DE to develop a new DE framework, named social learning DE (SL-DE). Unlike the classical DE algorithms, where the parents in mutation are randomly selected from the current population, SL-DE uses the ASL strategy to extract neighborhood relationship information of the population to guide the selection of parents in mutation. ASL consists of four operators: in the social ranking operator, individuals in the current population are sorted according to their fitness values; in the evaluating social influence operator, the social influence of each individual is evaluated based on its ranking value; in the building social network operator, a new social network is built by establishing the relationships between pairs of individuals according to their social influences; in the constructing neighborhood operator, the neighborhood of each individual is constructed from the built social network. With ASL, each individual is only allowed to interact with its neighbors and the parents in mutation will be selected within its neighborhood. In this way, the neighborhood relationship information can be utilized effectively to guide the search of DE.
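The four ASL operators can be sketched as a pipeline. The exact influence function and link rule are specified in Section 3 of the paper; the rank-to-influence mapping and the probabilistic link rule below are illustrative placeholders only:

```python
import numpy as np

def asl_neighborhoods(fitness, rng=None):
    """Illustrative sketch of the four ASL operators (minimization).
    The influence mapping and the link probability are placeholders,
    not the paper's actual formulas."""
    rng = rng or np.random.default_rng(0)
    NP = len(fitness)
    # 1. Social ranking: rank 0 = best (smallest fitness).
    ranks = np.argsort(np.argsort(fitness))
    # 2. Social influence: better-ranked individuals get higher
    #    influence in (0, 1] (placeholder linear mapping).
    influence = (NP - ranks) / NP
    # 3. Social network: link a pair with probability driven by
    #    the product of their influences (placeholder rule).
    neighbors = [[] for _ in range(NP)]
    for i in range(NP):
        for j in range(i + 1, NP):
            if rng.random() < influence[i] * influence[j]:
                neighbors[i].append(j)
                neighbors[j].append(i)
    # 4. Neighborhood: the neighbors of i are the nodes linked to i;
    #    mutation parents for i would then be drawn from this list.
    #    (A full implementation would also guarantee nonempty
    #    neighborhoods.)
    return neighbors
```

The design intent is that high-influence (fit) individuals acquire more links and are therefore more often chosen as parents, introducing the selective pressure that uniform random parent selection lacks.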

To evaluate the effectiveness of the proposed approach, we apply SL-DE to several classical DE algorithms, as well as advanced DE variants. Extensive experiments have been carried out on a set of benchmark functions from the 2013 IEEE congress on evolutionary computation (CEC) (including real-parameter optimization [25] and large-scale global optimization [24]) and the CEC 2011 on real-world application problems [10]. Simulation results have shown the advantages of SL-DE when compared with other algorithms on the test functions.

In summary, the major characteristics of SL-DE include the following:

  • ASL is proposed to extract neighborhood relationship information of individuals during the evolutionary process, which shows some insights into utilizing population information with the social learning mechanism.

  • In SL-DE, each individual is only allowed to interact with its neighbors and the parents in mutation will be selected from the neighborhood, which provides an alternative for selecting parents in the mutation operator of DE.

  • Because the simple structure of the classical DE algorithm has been maintained, SL-DE is still very simple and can be easily applied to most advanced DE variants to further improve their performance.

The rest of this paper is organized as follows. Section 2 briefly reviews some related work. The proposed SL-DE is presented in detail in Section 3. Section 4 reports the extensive experimental results. Finally, the conclusions are drawn in Section 5.

Section snippets

DE

In this study, DE is used for solving the numerical optimization problem [39]. Without loss of generality, we consider the minimization problem f(X), X ∈ ℝ^D, where D is the dimension of the decision variables. DE evolves a population of NP vectors representing the candidate solutions. Each vector is denoted as X_{i,G} = (x_{i,G}^1, x_{i,G}^2, …, x_{i,G}^D), where i = 1, 2, …, NP, NP is the population size, and G is the current generation. In the classical DE algorithms, the algorithmic schemes can be
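One generation of the classical scheme (DE/rand/1/bin) can be written compactly as below; this is a minimal sketch of the standard algorithm, not the authors' implementation:

```python
import numpy as np

def de_generation(pop, f, F=0.5, CR=0.9, rng=None):
    """One generation of classical DE/rand/1/bin on a population
    `pop` of shape (NP, D), minimizing the objective f."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # Mutation: three random, mutually distinct parents != i.
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover with one guaranteed dimension j_rand.
        j_rand = rng.integers(D)
        mask = rng.random(D) < CR
        mask[j_rand] = True
        trial = np.where(mask, mutant, pop[i])
        # Selection: keep the trial if it is no worse.
        if f(trial) <= f(pop[i]):
            new_pop[i] = trial
    return new_pop
```

SL-DE keeps this loop intact and changes only how r1, r2, r3 are drawn: from the individual's ASL neighborhood instead of the whole population.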

Motivations

In most DE algorithms, vectors are selected as parents for mutation with equal probability, without any selective pressure. Due to this high degree of randomness, such a mutation strategy causes DE to be slow to exploit solutions and inefficient when searching complex problem spaces. As reviewed in Section 2.2, many approaches have been developed to deal with this problem by utilizing the population information. It is clear that these attempts work well for improving the performance of DE. However,

Experimental results

In this section, extensive experiments are carried out to evaluate the performance of SL-DE. A test suite of benchmark functions is used, including those from the CEC 2013 special sessions on real-parameter optimization [25] and large-scale global optimization [24], and the CEC 2011 competition on real-world application problems [10]. Detailed definitions can be found in [25], [24] and [10], respectively.

The experiments can be divided into eight parts:

(1) Sections 4.2 and 4.3 investigate what benefits can be obtained

Conclusion and future research

Inspired by the imitation phenomenon of social learning in animal societies, an adaptive social learning (ASL) strategy is proposed, and a new DE framework named social learning DE (SL-DE) is developed by introducing ASL into DE. Unlike the classical DE algorithms, SL-DE extracts neighborhood relationship information of individuals in the current population to guide the selection of parents.

Extensive experiments have been carried out to evaluate the effectiveness of SL-DE by comparing it with

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (61305085, 61572206, 61502184, 61572204), the Natural Science Foundation of Fujian Province of China (2014J05074, 2015J0101), the Promotion Program for Young and Middle-aged Teacher in Science and Technology Research of Huaqiao University (ZQN-PY410).

References (50)

  • Y. Cai et al.

    Learning-enhanced differential evolution for numerical optimization

    Soft Comput.

    (2012)
  • F. Caraffini et al.

A CMA-ES super-fit scheme for the re-sampled inheritance search

2013 IEEE Congress on Evolutionary Computation (CEC)

    (2013)
  • F. Caraffini et al.

    Super-fit multicriteria adaptive differential evolution

2013 IEEE Congress on Evolutionary Computation (CEC)

    (2013)
  • S. Das et al.

    Differential evolution using a neighborhood-based mutation operator

IEEE Trans. Evol. Comput.

    (2009)
  • S. Das et al.

    Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems

    (2010)
  • S. Das et al.

    Differential evolution: a survey of the state-of-the-art

    IEEE Trans. Evol. Comput.

    (2011)
  • B. Dorronsoro et al.

    Improving classical and decentralized differential evolution with new mutation operator and population topologies

IEEE Trans. Evol. Comput.

    (2011)
  • B. Dorronsoro et al.

    Study of different small-world topology generation mechanisms for genetic algorithms

    Evolutionary Computation

    (2012)
  • L. dos Santos Coelho et al.

    Population’s variance-based adaptive differential evolution for real parameter optimization

2013 IEEE Congress on Evolutionary Computation (CEC)

    (2013)
  • M. El-Abd

Testing a particle swarm optimization and artificial bee colony hybrid algorithm on the CEC 2013 benchmarks

2013 IEEE Congress on Evolutionary Computation (CEC)

    (2013)
  • S.M. Elsayed et al.

A genetic algorithm for solving the CEC 2013 competition problems on real-parameter optimization

2013 IEEE Congress on Evolutionary Computation (CEC)

    (2013)
  • S.M. Elsayed et al.

Differential evolution with automatic parameter configuration for solving the CEC 2013 competition on real-parameter optimization

2013 IEEE Congress on Evolutionary Computation (CEC)

    (2013)
  • M.G. Epitropakis et al.

    Enhancing differential evolution utilizing proximity-based mutation operators

IEEE Trans. Evol. Comput.

    (2011)
  • W. Gong et al.

Differential evolution with ranking-based mutation operators

IEEE Trans. Cybern.

    (2013)
  • S. Guo et al.

    Improving differential evolution with successful-parent-selecting framework

    IEEE Trans. Evol. Comput.

    (2015)