Dynamic multi-swarm differential learning particle swarm optimizer
Introduction
With the development of studies on self-organized behaviors in social insects, and inspired by these biological phenomena, many swarm intelligence algorithms have emerged during the past two decades. These algorithms include the cuckoo search algorithm [1], pigeon-inspired optimization [2], ant colony optimization [3], etc. In the real world, many optimization problems are non-differentiable, NP-hard and highly non-linear. Traditional techniques cannot solve these optimization problems effectively. Swarm intelligence algorithms are proving to be better than traditional techniques and are thus widely used [4]. For this reason, research on swarm intelligence algorithms is a hot topic in computer science.
Inspired by the foraging behaviors of bird swarms, Kennedy and Eberhart proposed particle swarm optimization (PSO) in 1995 [5]. In PSO, to imitate the flight behavior of birds, two simple equations of motion guide the particles in searching for the globally optimal solution. As a population-based iterative algorithm, PSO is conceptually simple and easy to implement. It has been successfully applied to diverse real-world engineering optimization problems [6], [7], [8], [9]. For these reasons, PSO has become one of the most popular and well-known algorithms in swarm intelligence [10].
However, many studies have shown that PSO easily gets stuck in a local optimal region and converges slowly on complex multimodal functions. To overcome these disadvantages, it is important that PSO achieve a good balance between exploration and exploitation, and many PSO variants have been proposed. These methods include parameter tuning strategies [11], [12], [13], [14], [15], modified update equations and hybridization with other mechanisms [16], [17], [18], [19], [20], and multi-swarm techniques.
During the last decade, the multi-swarm technique has attracted increasing attention. Information exchange among sub-swarms can better maintain the diversity of the population, so this technique can greatly improve the performance of PSO, and a number of multi-swarm PSO variants have been proposed [21], [22], [23], [24], [25], [26], [27], [28]. In 2007, Niu et al. proposed a multi-swarm cooperative particle swarm optimizer (MCPSO) that adopts a master-slave model. Furthermore, a multi-swarm cooperative particle swarm optimizer with a new fuzzy modeling strategy was proposed [29]. Zhang and Ding proposed a multi-swarm self-adaptive and cooperative particle swarm optimization (MSCPSO) [21], which uses several strategies to avoid falling into local optima and to improve diversity. Tatsumi et al. proposed chaotic multi-swarm particle swarm optimization using combined quartic functions (CMPSO-CQ) [30], in which a perturbation-based chaotic system updates each particle's position; numerical experiments demonstrated that CMPSO-CQ performs well. Many dynamic multi-swarm particle swarm optimizers have also been presented [31], [32], [33], [34], [35], [36], [37], [38]. The dynamic multi-swarm particle swarm optimizer (DMS-PSO) was proposed in 2005 [31]; it uses small swarm sizes and dynamically changing swarm topologies, with a regroup period parameter controlling the information exchange among the sub-swarms. Several improved versions of DMS-PSO followed. To improve local search ability, Ref. [34] merged the Quasi-Newton method into DMS-PSO. Ref. [35] combined DMS-PSO with the harmony search algorithm to keep all particles from getting trapped in local optimal regions; test results showed better performance on some multimodal and unimodal functions. Xu et al. proposed a dynamic multi-swarm particle swarm optimizer with a cooperative learning strategy (DMS-PSO-CLS) [37], in which the two worst particles of each sub-swarm learn from the better particle of two randomly selected sub-swarms; this cooperative learning strategy allows information to be used more effectively to generate better-quality solutions. Liang et al. presented a self-adaptive dynamic particle swarm optimizer (SaDPSO) [36] that embeds a self-adaptive parameter strategy and an information-sharing mechanism for the best parameters.
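The random-regrouping mechanism shared by the DMS-PSO family can be sketched as follows. This is a generic illustration under assumed conventions (the helper name and the equal-split policy are ours, not any specific paper's implementation): every regroup period R, the particle indices are randomly repartitioned into sub-swarms so that information migrates across the whole population.

```python
import numpy as np

def regroup(num_particles, num_subswarms, rng=None):
    """Randomly repartition particle indices into roughly equal-sized
    sub-swarms, as is done every regroup period R in DMS-PSO-style methods."""
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(num_particles)          # shuffle all particle indices
    return np.array_split(perm, num_subswarms)     # cut the shuffle into sub-swarms
```

Between regroupings each sub-swarm evolves independently; the reshuffle is what lets a particle that learned a good region carry that information into a new sub-swarm.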
Differential evolution (DE) is another popular and efficient evolutionary algorithm (EA) [39]. Owing to its remarkable performance, DE has been widely applied to many real-world applications [40], [41], [42], [43], [44]. DE has three operators: mutation, crossover and selection. To improve DE's performance, different mutation operators have been proposed [45], [46]. DE mutation can improve the diversity of the swarm [47]. PSO, by contrast, is easily trapped in local optima because of the guidance of the global best information. DE can therefore improve the performance of PSO, yet hybrid studies of PSO and DE are still relatively scarce.
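The three DE operators named above can be sketched as one generation of the classic DE/rand/1/bin scheme. This is a generic textbook illustration (F and CR values are common defaults), not the specific differential operators adopted later in DMSDL-PSO:

```python
import numpy as np

def de_rand_1_bin(pop, fitness, f, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin: mutation, crossover, selection.
    pop: (N, D) population; fitness: (N,) objective values (minimized);
    f: objective function mapping a D-vector to a scalar."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(N):
        # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i.
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: each dimension comes from v with probability CR,
        # and at least one dimension is forced to come from v.
        mask = rng.random(D) < CR
        mask[rng.integers(D)] = True
        u = np.where(mask, v, pop[i])
        # Greedy selection: the trial vector replaces the parent only if no worse.
        fu = f(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```

The greedy selection step is why DE never loses its best solution, while the random difference vector in the mutation step is what injects the population diversity mentioned above.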
Motivated by these observations, we propose a novel PSO variant, DMSDL-PSO. Because the multi-swarm technique has many advantages, it is adopted in DMSDL-PSO. Some research has shown that the velocity update equation of PSO restricts the exploration capability of the algorithm [48]. To overcome this shortcoming, DMSDL-PSO adopts a modified velocity update equation with a learning model, an approach that has become popular in the past decade. DE utilizes each particle's historical information to generate high-quality exemplars for each sub-swarm. These good exemplars effectively guide the particles toward promising regions faster. DMSDL-PSO has a very simple multi-swarm, two-layer structure for information exchange. The test results also confirm that DMSDL-PSO performs well on the benchmark functions.
The remainder of this paper is organized as follows. Section 2 briefly introduces PSO, DMS-PSO and DE. Section 3 introduces DMSDL-PSO in detail. Section 4 investigates the parameter settings of DMSDL-PSO and discusses and analyzes the test results of DMSDL-PSO against several state-of-the-art algorithms. Section 5 presents and discusses experimental results on two real-world optimization problems. Section 6 gives the conclusions and an outlook on future work.
Section snippets
Particle swarm optimization
In PSO [49], a particle represents a potential solution of the optimization problem, and the whole swarm flies through the search space to search for the global optimum. Let N denote the number of particles. In a D-dimensional hyperspace, the velocity vector of the i-th particle is represented as V_i = (v_i1, v_i2, …, v_iD), where i = 1, 2, …, N and d = 1, 2, …, D. The position vector of the i-th particle is X_i = (x_i1, x_i2, …, x_iD), where i = 1, 2, …, N.
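With these definitions, the canonical inertia-weight velocity and position updates of PSO can be sketched as below. The constants w, c1, c2 are common defaults from the literature, an assumption rather than this paper's settings:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One canonical PSO iteration.
    X, V, pbest: arrays of shape (N, D); gbest: array of shape (D,).
    Returns the updated positions and velocities."""
    N, D = X.shape
    r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
    # v_id <- w*v_id + c1*r1*(pbest_id - x_id) + c2*r2*(gbest_d - x_id)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    # x_id <- x_id + v_id
    X = X + V
    return X, V
```

Each particle is pulled toward its own best position (pbest) and the swarm's best position (gbest); it is exactly this shared gbest attraction that the multi-swarm variants discussed above seek to relax.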
Dynamic multi-swarm differential learning particle swarm optimizer (DMSDL-PSO)
DMS-PSO based on the Quasi-Newton method is good at local exploitation, but its global exploration ability is weak. Numerical results show that DMS-PSO is at a disadvantage on some multimodal problems [50]. Research indicates that the velocity update equation of PSO may make a particle oscillate in some cases or trap it in a local optimum in others, causing premature convergence [48]. In order to solve these
Experimental study
This section first presents the benchmark functions and parameter settings for the following experiments. Then the parameters of DMSDL-PSO are investigated in detail. Extensive experiments are subsequently conducted against several well-known algorithms on 41 classical benchmark functions. Experimental environment: MATLAB 2012a; Windows 7; Intel Pentium G3250 CPU @ 3.2 GHz; 8 GB RAM.
Application to two real-world problems
In this part, DMSDL-PSO is applied to practical problems to further test its performance: parameter estimation for chaotic systems and for Frequency-Modulated (FM) sound waves.
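The FM sound-wave task is a standard six-parameter estimation benchmark. A sketch of its objective follows, assuming the commonly used formulation with target parameters (1.0, 5.0, 1.5, 4.8, 2.0, 4.9) and angular step θ = 2π/100; the paper's exact settings may differ:

```python
import numpy as np

THETA = 2 * np.pi / 100          # assumed angular step
T = np.arange(101)               # assumed sample points t = 0, 1, ..., 100

def fm_wave(p, t=T):
    """Nested frequency-modulated wave defined by p = (a1, w1, a2, w2, a3, w3)."""
    a1, w1, a2, w2, a3, w3 = p
    return a1 * np.sin(w1 * t * THETA
                       + a2 * np.sin(w2 * t * THETA
                                     + a3 * np.sin(w3 * t * THETA)))

# Target wave generated from the assumed true parameter vector.
TARGET = fm_wave(np.array([1.0, 5.0, 1.5, 4.8, 2.0, 4.9]))

def fm_error(p):
    """Fitness to minimize: sum of squared errors against the target wave."""
    return float(np.sum((fm_wave(p) - TARGET) ** 2))
```

An optimizer such as DMSDL-PSO would minimize fm_error over the six-dimensional parameter space; the nested sinusoids make this landscape highly multimodal.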
As a typical chaotic system, the Lorenz system is employed. Its dynamic equations are:

ẋ = σ(y − x), ẏ = x(ρ − z) − y, ż = xy − βz.

Here, the parameter values used for the Lorenz system are σ = 10, ρ = 28, β = 3/2. In this simulation, the Lorenz system first evolves freely
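A minimal sketch of the resulting parameter-estimation objective is given below, assuming classical RK4 integration and a sum-of-squared-errors fitness; the step size, horizon and initial state are illustrative choices, not the paper's:

```python
import numpy as np

def lorenz_rhs(state, sigma, rho, beta):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def simulate(params, x0, dt=0.01, steps=500):
    """Integrate the Lorenz system with the classical RK4 scheme."""
    sigma, rho, beta = params
    traj = np.empty((steps, 3))
    s = np.asarray(x0, dtype=float)
    for k in range(steps):
        k1 = lorenz_rhs(s, sigma, rho, beta)
        k2 = lorenz_rhs(s + 0.5 * dt * k1, sigma, rho, beta)
        k3 = lorenz_rhs(s + 0.5 * dt * k2, sigma, rho, beta)
        k4 = lorenz_rhs(s + dt * k3, sigma, rho, beta)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[k] = s
    return traj

def estimation_error(candidate, observed, x0):
    """Fitness to minimize: SSE between the trajectory under the candidate
    parameters (sigma, rho, beta) and the observed trajectory."""
    return float(np.sum((simulate(candidate, x0) - observed) ** 2))
```

Because the system is chaotic, even small parameter errors make the simulated trajectory diverge quickly from the observed one, which is what makes this objective a demanding optimization benchmark.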
Conclusion
In this paper, a dynamic multi-swarm differential learning particle swarm optimization algorithm (DMSDL-PSO) has been proposed. In DMSDL-PSO, three different differential operators are adopted to construct better guiding vectors, which guide the particles to search efficiently. During the evolution process, information can be exchanged between different sub-swarms, and each sub-swarm adopts the same differential learning strategy. By making full use of these mechanisms, DMSDL-PSO has good searching
Acknowledgements
The authors thank the Editor and the anonymous referees for their constructive comments and valuable suggestions, which helped improve the quality of this paper. This paper is supported by the National Key Research and Development Program of China (Grant no. 2016YFB0800602), the National Natural Science Foundation of China (Grant nos. 61370221, 61771071 and 61573067), the Plan For Scientific Innovation Talent of Henan Province (No. 164200510007) and the Program for Innovative
References (58)
- et al., Cuckoo search via Lévy flights.
- et al., Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems, Inf. Sci. (2012).
- et al., A hybridization of an improved particle swarm optimization and gravitational search algorithm for multi-robot path planning, Swarm Evol. Comput. (2016).
- et al., Local search based hybrid particle swarm optimization algorithm for multiobjective optimization, Swarm Evol. Comput. (2012).
- et al., Particle swarm and box's complex optimization methods to design linear tubular switched reluctance generators for wave energy conversion, Swarm Evol. Comput. (2016).
- et al., An efficient two-level swarm intelligence approach for RNA secondary structure prediction with bi-objective minimum free energy scores, Swarm Evol. Comput. (2016).
- Interior search algorithm (ISA): a novel approach for global optimization, ISA Trans. (2014).
- et al., Control strategy PSO, Appl. Soft Comput. (2016).
- et al., A novel stability-based adaptive inertia weight for particle swarm optimization, Appl. Soft Comput. (2016).
- et al., A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques, Appl. Soft Comput. (2015).