Dynamic multi-swarm differential learning particle swarm optimizer

https://doi.org/10.1016/j.swevo.2017.10.004

Abstract

Because different optimization algorithms have different search behaviors and advantages, hybridization is one of the main research directions for improving the performance of particle swarm optimization (PSO). Inspired by this idea, a dynamic multi-swarm differential learning particle swarm optimizer (DMSDL-PSO) is proposed in this paper. We propose a novel method that merges the differential evolution (DE) operator into each sub-swarm of the DMSDL-PSO. By combining the exploration capability of differential mutation with the Quasi-Newton method as a local searcher to enhance exploitation, DMSDL-PSO achieves good exploration and exploitation. According to the characteristics of the DMSDL-PSO, three modified differential mutation operators are discussed, and differential mutation is applied to each particle's personal historical best position. Because the velocity updating equation of standard PSO has some shortcomings, a modified velocity updating equation is adopted in DMSDL-PSO. In DMSDL-PSO, the particles are divided into several small, dynamic sub-swarms, and the dynamic regrouping of these sub-swarms promotes information exchange across the whole swarm. To test the performance of DMSDL-PSO, 41 benchmark functions are adopted, and extensive numerical experiments compare DMSDL-PSO with other popular algorithms. The results demonstrate that DMSDL-PSO performs better on some benchmark functions.

Introduction

With the development of studies on self-organized behaviors in social insects, many swarm intelligence algorithms inspired by these biological phenomena have emerged during the past two decades. These algorithms include the cuckoo search algorithm [1], pigeon-inspired optimization [2], ant colony optimization [3], etc. In the real world, many optimization problems are non-differentiable, NP-hard, and highly non-linear. Traditional techniques cannot solve these optimization problems effectively. Swarm intelligence algorithms have proven better than traditional techniques and are thus widely used [4]. For this reason, research on swarm intelligence algorithms is a hot topic in computer science.

Inspired by the foraging behavior of bird flocks, Kennedy and Eberhart proposed particle swarm optimization (PSO) in 1995 [5]. In PSO, in order to imitate the flight behavior of birds, two simple equations of motion guide the particles in their search for the globally optimal solution. As a population-based iterative algorithm, PSO is conceptually simple and easy to implement, and it has been successfully applied to real-world engineering optimization problems in diverse fields [6], [7], [8], [9]. For these reasons, PSO has become one of the most popular and well-known algorithms in swarm intelligence [10].

However, many studies have shown that PSO easily gets stuck in a local optimum and converges slowly on complex multimodal functions. To overcome these disadvantages, it is important that PSO achieve a good balance between exploration and exploitation, and many PSO variants have been proposed to this end. These methods include parameter tuning strategies [11], [12], [13], [14], [15], changes to the update equations and hybridization with other mechanisms [16], [17], [18], [19], [20], and multi-swarm techniques.

During the last decade, the multi-swarm technique has attracted increasing attention. Information exchange among sub-swarms helps maintain the diversity of the population, so this technique can greatly improve the performance of PSO. A number of multi-swarm PSO variants have been proposed [21], [22], [23], [24], [25], [26], [27], [28]. In 2007, Niu et al. proposed a multi-swarm cooperative particle swarm optimizer (MCPSO) built on a master-slave model. A multi-swarm cooperative particle swarm optimizer with a new fuzzy modeling strategy was also proposed [29]. Zhang and Ding proposed a multi-swarm self-adaptive and cooperative particle swarm optimization (MSCPSO) [21], which uses several strategies to avoid falling into local optima and to improve diversity. Tatsumi et al. proposed chaotic multi-swarm particle swarm optimization using combined quartic functions (CMPSO-CQ) [30], in which a perturbation-based chaotic system updates each particle's position; numerical experiments demonstrated its good performance. Many dynamic multi-swarm particle swarm optimizers have also been presented [31], [32], [33], [34], [35], [36], [37], [38]. A dynamic multi-swarm particle swarm optimizer (DMS-PSO) was proposed in 2005 [31]; it uses small sub-swarms whose topologies change dynamically, with a regroup period parameter controlling the information exchange among sub-swarms. Several improved versions of DMS-PSO followed. To improve local search ability, the Quasi-Newton method was merged into DMS-PSO in Ref. [34]. In Ref. [35], DMS-PSO was combined with the harmony search algorithm to prevent all particles from getting trapped in local optima; the tests showed better performance on some multimodal and unimodal functions. Xu et al. proposed a dynamic multi-swarm particle swarm optimizer with cooperative learning strategy (DMS-PSO-CLS) [37], in which the two worst particles of each sub-swarm learn from the better particles of two randomly selected sub-swarms; this cooperative learning strategy uses information more effectively to generate better-quality solutions. Liang et al. presented a self-adaptive dynamic particle swarm optimizer (SaDPSO) [36] that embeds a self-adaptive parameter strategy and an information-sharing mechanism for the best parameters.
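The dynamic regrouping at the heart of DMS-PSO can be sketched as follows. This is an illustrative reconstruction: the function name and the equal-split policy are our assumptions, not details taken from Ref. [31].

```python
import numpy as np

def regroup(num_particles, num_subswarms, rng=None):
    """Randomly repartition particle indices into roughly equal-sized
    sub-swarms, as done once every 'regroup period' R iterations in
    DMS-PSO so that information flows across the whole swarm."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(num_particles)          # shuffle particle indices
    return np.array_split(perm, num_subswarms)     # split into sub-swarms
```

With 30 particles and 10 sub-swarms, each call yields 10 fresh groups of 3 particles; between regroupings each sub-swarm searches independently with its own local best.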

Differential evolution (DE) is another popular and efficient evolutionary algorithm (EA) [39]. Owing to its remarkable performance, DE has been widely applied to real-world problems [40], [41], [42], [43], [44]. DE uses three operators: mutation, crossover, and selection. To improve DE's performance, various mutation operators have been proposed [45], [46]. DE mutation can improve the diversity of the swarm [47]. By contrast, due to the guidance of the global best information, PSO is easily trapped in local optima. DE can therefore improve the performance of PSO, yet hybrid studies of PSO and DE are still relatively scarce.
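The mutation and crossover operators mentioned above can be illustrated with the classic DE/rand/1 mutation and binomial crossover. This is a generic sketch of standard DE, not the modified operators used in DMSDL-PSO; the values of F and CR are illustrative defaults.

```python
import numpy as np

def de_rand_1(pop, F=0.5, rng=None):
    """DE/rand/1 mutation: v_i = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 mutually distinct and different from i."""
    rng = rng or np.random.default_rng()
    N, _ = pop.shape
    mutants = np.empty_like(pop)
    for i in range(N):
        candidates = [j for j in range(N) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return mutants

def binomial_crossover(pop, mutants, CR=0.9, rng=None):
    """Binomial crossover: each dimension comes from the mutant with
    probability CR, with at least one mutant dimension guaranteed."""
    rng = rng or np.random.default_rng()
    N, D = pop.shape
    mask = rng.random((N, D)) < CR
    jrand = rng.integers(0, D, size=N)     # forced mutant dimension per row
    mask[np.arange(N), jrand] = True
    return np.where(mask, mutants, pop)
```

Selection would then keep each trial vector only if its fitness is no worse than that of its parent.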

Motivated by these observations, we propose a novel PSO variant, DMSDL-PSO. Because the multi-swarm technique has many advantages, it is adopted in DMSDL-PSO. Some studies have shown that the velocity update equation of PSO restricts the exploration capability of the algorithm [48]. To overcome this shortcoming, DMSDL-PSO adopts a modified velocity update equation with a learning model, an approach that has become popular in the past decade. DE utilizes each particle's historical information to generate high-quality exemplars for each sub-swarm. These good exemplars effectively guide the particles toward promising regions faster. DMSDL-PSO has a very simple multi-swarm, two-layer structure for information exchange. The test results also confirm that DMSDL-PSO performs well on some benchmark functions.

The remainder of this paper is organized as follows. Section 2 briefly introduces PSO, DMS-PSO, and DE. Section 3 introduces DMSDL-PSO in detail. Section 4 investigates the parameter settings of DMSDL-PSO and discusses and analyzes the test results comparing several state-of-the-art algorithms with DMSDL-PSO. Section 5 presents and discusses experimental results on two real-world optimization problems. Section 6 gives the conclusions and an outlook on future work.


Particle swarm optimization

In PSO [49], a particle represents a potential solution of the optimization problem. The whole particle swarm flies through the search space in search of the global optimum. Here, the parameter N denotes the number of particles. In a D-dimensional hyperspace, the velocity vector of the ith particle (i ∈ [1, N]) is Vi = (vi1, vi2, …, viD), where vid ∈ [vmin, vmax] and d ∈ [1, D]. The position vector of the ith particle is Xi = (xi1, xi2, …, xiD), where xid ∈ [xmin, xmax].
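The notation above can be made concrete with one iteration of the canonical PSO update. The inertia-weight form and the coefficient values below are common defaults from the PSO literature, not necessarily the (modified) update used in this paper.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.729, c1=1.494, c2=1.494,
             vmin=-1.0, vmax=1.0, xmin=-5.0, xmax=5.0, rng=None):
    """One canonical PSO iteration: update velocities, then positions.

    X, V, pbest have shape (N, D); gbest has shape (D,).
    Each velocity component is clamped to [vmin, vmax] and each
    position component to [xmin, xmax], matching the bounds v_id
    and x_id described in the text.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(X.shape)   # fresh uniform randoms per dimension
    r2 = rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    V = np.clip(V, vmin, vmax)
    X = np.clip(X + V, xmin, xmax)
    return X, V
```

Iterating this step while tracking each particle's personal best and the swarm's global best yields the basic PSO loop.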

Dynamic multi-swarm differential learning particle swarm optimizer (DMSDL-PSO)

DMS-PSO with the Quasi-Newton method is good at local exploitation, but its global exploration ability is weak. Numerical results show that DMS-PSO is indeed at a disadvantage on some multimodal problems [50]. Research indicates that the velocity update equation of PSO may make a particle oscillate in some cases or become trapped in a local optimum in others, which causes premature convergence [48]. In order to solve these

Experimental study

This section first presents the benchmark functions and parameter settings for the following experiments. The parameters of DMSDL-PSO are then investigated in detail. Extensive experiments are subsequently conducted against several well-known algorithms on 41 classical benchmarks. Experimental environment: MATLAB 2012a; Windows 7; Intel Pentium G3250 CPU at 3.2 GHz; 8 GB RAM.

Application to two real-world problems

In this part, DMSDL-PSO is applied to practical problems to further test its performance. We apply DMSDL-PSO to parameter estimation for chaotic systems and for Frequency-Modulated (FM) sound waves.

As a typical chaotic system, the Lorenz system is employed. Its dynamic equations are ẋ = δ(y − x), ẏ = ρx − xz − y, ż = xy − βz. Here, the parameter values used for the Lorenz system are δ = 10, ρ = 28, β = 3/2. In this simulation, the Lorenz system first evolves freely
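Parameter estimation here amounts to minimizing the error between an observed trajectory and one simulated with candidate parameters. A minimal sketch, assuming forward-Euler integration and a sum-of-squared-errors objective (the paper's integration scheme and exact error measure are not shown in this snippet):

```python
import numpy as np

def lorenz_series(params, x0, dt=0.01, steps=300):
    """Integrate the Lorenz system x' = d(y-x), y' = rx - xz - y,
    z' = xy - bz with forward Euler from initial state x0."""
    delta, rho, beta = params
    traj = np.empty((steps, 3))
    x, y, z = x0
    for t in range(steps):
        dx = delta * (y - x)
        dy = rho * x - x * z - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[t] = (x, y, z)
    return traj

def estimation_objective(candidate, observed, x0):
    """Sum of squared errors between the observed states and the
    trajectory produced by the candidate parameters -- the fitness
    the optimizer minimizes."""
    return float(np.sum((lorenz_series(candidate, x0) - observed) ** 2))
```

The optimizer treats (δ, ρ, β) as a 3-dimensional search space; the objective reaches zero only when the candidate reproduces the observed trajectory.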

Conclusion

In this paper, a dynamic multi-swarm differential learning particle swarm optimization algorithm has been proposed. In DMSDL-PSO, three different differential operators are adopted to construct better guiding vectors, which lead the particles to search efficiently. During the evolution process, information is exchanged between different sub-swarms, and each sub-swarm adopts the same differential learning strategy. By making full use of these mechanisms, DMSDL-PSO has good searching

Acknowledgements

The authors thank the Editor and the anonymous referees for their constructive comments and valuable suggestions, which helped improve the quality of this paper. This work is supported by the National Key Research and Development Program of China (Grant no. 2016YFB0800602), the National Natural Science Foundation of China (Grant nos. 61370221, 61771071 and 61573067), the Plan For Scientific Innovation Talent of Henan Province (No. 164200510007) and the Program for Innovative

References (58)

  • M. Tanweer et al.

    Directionally driven self-regulating particle swarm optimization algorithm

    Swarm Evol. Comput.

    (2016)
  • A. Yadav et al.

    Gravitational swarm optimizer for global optimization

    Swarm Evol. Comput.

    (2016)
  • H. Samma et al.

    A new reinforcement learning-based memetic particle swarm optimizer

    Appl. Soft Comput.

    (2016)
  • R. Jensi et al.

    An enhanced particle swarm optimization with levy flight for global optimization

    Appl. Soft Comput.

    (2016)
  • M. Pluhacek et al.

Chaos particle swarm optimization with ensemble of chaotic systems

    Swarm Evol. Comput.

    (2015)
  • J. Zhang et al.

    A multi-swarm self-adaptive and cooperative particle swarm optimization

    Eng. Appl. Artif. Intell.

    (2011)
  • B. Niu et al.

MCPSO: a multi-swarm cooperative particle swarm optimizer

    Appl. Math. Comput.

    (2007)
  • S. Mukhopadhyay et al.

    Global optimization of an optical chaotic system by chaotic multi swarm particle swarm optimization

    Expert Syst. Appl.

    (2012)
  • Ş. Gülcü et al.

    A novel parallel multi-swarm algorithm based on comprehensive learning particle swarm optimization

    Eng. Appl. Artif. Intell.

    (2015)
  • B. Niu et al.

    A multi-swarm optimizer based fuzzy modeling approach for dynamic systems processing

    Neurocomputing

    (2008)
  • S.-Z. Zhao et al.

    Dynamic multi-swarm particle swarm optimizer with harmony search

    Expert Syst. Appl.

    (2011)
  • X. Xu et al.

    Dynamic multi-swarm particle swarm optimizer with cooperative learning strategy

    Appl. Soft Comput.

    (2015)
  • S. Das et al.

    Recent advances in differential evolution–an updated survey

    Swarm Evol. Comput.

    (2016)
  • W. Shao et al.

    A self-guided differential evolution with neighborhood search for permutation flow shop scheduling

    Expert Syst. Appl.

    (2016)
  • A. Zamuda et al.

    Constrained differential evolution optimization for underwater glider path planning in sub-mesoscale eddy sampling

    Appl. Soft Comput.

    (2016)
  • M. Ghasemi et al.

    Colonial competitive differential evolution: an experimental study for optimal economic load dispatch

    Appl. Soft Comput.

    (2016)
  • H. Duan et al.

    Pigeon-inspired optimization: a new swarm intelligence optimizer for air robot path planning

    Int. J. Intell. Comput. Cybern.

    (2014)
  • M. Dorigo et al.

The ant system: optimization by a colony of cooperating agents

    IEEE Trans. Syst. Man Cybern. Part B

    (1996)
  • R.C. Eberhart et al.

    A new optimizer using particle swarm theory
