Particle swarm optimization with damping factor and cooperative mechanism
Introduction
Particle swarm optimization (PSO) was first proposed by Eberhart and Kennedy [1], inspired by biosocial phenomena such as bird flocking and fish schooling. PSO is a population-based stochastic optimization technique, the basic notion of which is that social sharing of information among peers provides a great evolutionary advantage. In PSO and its extended algorithms, the swarm population samples the candidate solution space of a fitness function, and the agents of the swarm are correspondingly called particles. Animals, especially birds and fish, normally travel in groups without colliding. Each member adjusts its position and velocity using collective and historical information, which reduces the individual's effort in searching for food or shelter. Compared with other evolutionary optimization algorithms, PSO has better computational efficiency for continuous optimization problems, mainly because it requires less memory, fewer model parameters to adjust, and less effort to implement [2]. Thus, PSO and its extensions have many successful applications in practical engineering optimization [3].
Most studies in the literature have focused on parameter optimization for the standard PSO, including the swarm population size (N), inertia weight (w), acceleration factors (c_1, c_2), constraint factor (χ), and the velocities and positions of particles (v, x), since the solution performance is sensitive to the selection of these parameters [4]. The standard update rule is

v_{ij}(t+1) = w v_{ij}(t) + c_1 r_1 (p_{ij}(t) − x_{ij}(t)) + c_2 r_2 (g_j(t) − x_{ij}(t)),
x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1),   (1)

where v_{ij}(t) and x_{ij}(t) are the i-th particle's velocity and position in the j-th dimension at time index t, r_1 and r_2 are uniform random numbers in [0, 1], p_i is the historical optimal position of particle i, and g is the collective optimal position.
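As a concrete reference point for the update rule in Eq. (1), the following is a minimal sketch of the standard PSO minimizing the sphere function; the parameter values (w = 0.72, c_1 = c_2 = 1.49) are common textbook choices, not values prescribed by this paper.

```python
import random

def pso_sphere(num_particles=20, dim=2, iters=200, w=0.72, c1=1.49, c2=1.49, seed=1):
    """Standard PSO (Eq. (1)) minimizing the sphere function f(x) = sum(x_i^2)."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(num_particles)]
    V = [[0.0] * dim for _ in range(num_particles)]
    P = [x[:] for x in X]                      # personal best positions p_i
    pbest = [f(x) for x in X]                  # personal best fitness values
    g = min(range(num_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # collective (global) best g
    for _ in range(iters):
        for i in range(num_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (P[i][j] - X[i][j])
                           + c2 * r2 * (G[j] - X[i][j]))
                X[i][j] += V[i][j]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return gbest

print(pso_sphere())  # converges close to 0 on this easy unimodal problem
```

On harder multimodal landscapes this plain scheme is exactly where the premature-convergence problem discussed below arises.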
Although many efforts have been put into developing new techniques for parameter selection for different optimal problems, most current techniques have not obviously improved the limitation of the standard PSO [5]. To improve the performance and convergence of PSO, many researchers have successfully developed new variants of PSOs based on the theories in other fields. These variants are mainly in the following three categories:
- (1) biologically extended PSOs: PSO with an aging leader and challengers [6], social learning PSO [7], PSO with a cooperative approach [8], clubs-based PSO [9], group decision PSO [10], principal component PSO [11] and multigrouped PSO [12];
- (2) physically extended PSOs: PSO with a fine-tuning operator [13], opposition-based PSO [14], PSO with Bayesian techniques [15], PSO with recombination and dynamic linkage discovery [16], PSO with fuzzy clustering [17], two-parts-divided PSO [18], chaos-enhanced PSO [19] and gravitational global PSO [20];
- (3) hybrid PSOs with other heuristic algorithms: PSO with simulated annealing and swarm core evolutionary [21], PSO with differential evolution [22], genetic learning PSO [23], hybrid PSO with artificial bee colony [24], hybrid PSO with artificial fish swarm [25].
Note that more computational effort is required when PSO is combined and/or hybridized with other theories and algorithms. In addition, velocity-updating techniques have been improved to avoid premature convergence to local optimal points [26], [27]. Starting from a time-varying discrete dynamical system and a stochastic process, three papers have proved conditions under which PSO converges to optimal points, based on a second-order difference equation [28], [29]. Tian [30] gives a review of convergence analysis in PSO and its extended algorithms.
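The exact conditions proved in [28], [29] are not reproduced in this excerpt; as background, the widely cited deterministic analysis of the second-order difference equation underlying Eq. (1), treating the random coefficients by their expectation $\varphi = (c_1 + c_2)/2$, gives the stability region

$$-1 < w < 1, \qquad 0 < \varphi < 2(1 + w),$$

under which a particle's trajectory converges to a weighted average of its personal and global best positions. This is the standard textbook condition, not necessarily the exact result of the cited papers.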
This paper is motivated by two issues of PSO and its variants: (i) difficulty in obtaining optima in a large-scale, high-dimensional space, and (ii) premature convergence to local optimal points [31]. A new damping factor is used to balance the exploring and exploiting abilities of particles, and a cooperative mechanism between the global-best-oriented and the local-best-oriented swarms is employed to help find global optima more quickly. A parameter, the least optimal particles in individuals' histories, is adopted to decide whether the current information of a particle is abandoned and reinitialized, in order to reduce the negative effect of unfavorable particles on swarm evolution. Also, fuzzy c-means clustering is applied to cluster the particles' positions when establishing individuals' neighborhoods, in order to speed up convergence. The proposed approach, PSO-DFCM, shows better performance in global-optimum convergence and final optimal results than the standard PSO and three state-of-the-art PSO variants.
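The reinitialization step mentioned above can be sketched as follows. This is an illustrative interpretation only: the fraction of particles reinitialized and the uniform re-seeding rule are assumptions for the sketch, not the paper's exact procedure.

```python
import random

def reinitialize_worst(positions, pbest_values, frac=0.2, bounds=(-5.0, 5.0), seed=0):
    """Hypothetical sketch: re-seed the least-optimal particles (ranked by
    personal-best fitness, minimization assumed) with fresh random positions,
    so unfavorable particles stop dragging on swarm evolution."""
    rng = random.Random(seed)
    n = len(positions)
    k = max(1, int(frac * n))
    # indices of the k worst personal bests (largest values when minimizing)
    worst = sorted(range(n), key=lambda i: pbest_values[i], reverse=True)[:k]
    lo, hi = bounds
    for i in worst:
        positions[i] = [rng.uniform(lo, hi) for _ in positions[i]]
    return worst

# usage: re-seed the 2 worst of 10 one-dimensional particles
pos = [[float(i)] for i in range(10)]
vals = [float(i) for i in range(10)]
print(reinitialize_worst(pos, vals))  # → [9, 8]
```

In the actual algorithm this decision is driven by the "least optimal particles in individuals' histories" parameter; the sketch only shows the mechanical re-seeding.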
The rest of this paper is organized as follows: Section 2 presents the algorithm, introducing the damping factor, the cooperative mechanism and the choice of parameters. Section 3 discusses the experimental results. Concluding remarks are given in Section 4.
Section snippets
Damping movement inspired by general inertia law
In reality, many equilibrating systems generate resistive mechanisms against transient external forces that break the previous equilibrium state, bringing themselves back into a new equilibrium; this is called the General Inertia Law [32]. It has counterparts in various fields: Newton's laws in kinetics, Hooke's law in spring systems, Lenz's law in electromagnetics, Le Chatelier's principle in chemical reaction systems and the estrous cycle in biology. In PSO, optimal solutions
Experiments and results analysis
In order to test the performance of the proposed method, 24 benchmark functions are adopted from [39] to validate the proposed algorithm. The benchmark functions in Table 1 have either a narrow valley, a basin, or a huge number of local optima, which are challenging for optimum-search algorithms. A comparative study with the standard PSO and three state-of-the-art PSO variants demonstrates that the proposed method is more adaptive to large-scale and high-dimensional searching
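Two of the difficult landscape shapes mentioned above can be written down concretely: Rosenbrock (a narrow curved valley) and Rastrigin (a huge number of local optima). These are standard benchmark definitions; whether they are among the 24 functions of Table 1 is not confirmed by this excerpt.

```python
import math

def rosenbrock(x):
    """Narrow-valley benchmark; global minimum 0 at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """Many-local-optima benchmark; global minimum 0 at the origin."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

print(rosenbrock([1.0, 1.0]), rastrigin([0.0, 0.0]))  # → 0.0 0.0
```

The valley of Rosenbrock punishes algorithms that cannot follow a curved ridge, while Rastrigin's dense grid of local minima punishes those that converge prematurely.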
Conclusions
In this work, a novel variant of PSO (PSO-DFCM) is proposed, incorporating a damping factor and a cooperative mechanism. Two versions of collective best particles are aggregated to take advantage of the merits of both. Two swarms are employed to find optimal positions based on the cooperative mechanism. In the local-best-oriented swarm, a random walk is also incorporated into particles' velocity updates as a perturbing term, to avoid premature convergence of local best particles and
Acknowledgments
This work was supported by the National Natural Science Foundation of China (41274109), the Innovative Team Project of Sichuan Province, China (2015TD0020), and the Research Fund of State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, China (SKLGP2016Z009).
References (40)
- et al., A review of particle swarm optimization and its applications in solar photovoltaic system, Appl. Soft Comput. (2013)
- et al., A social learning particle swarm optimization algorithm for scalable optimization, Inform. Sci. (2015)
- et al., A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques, Appl. Soft Comput. (2015)
- et al., Gravitational swarm optimizer for global optimization, Swarm Evol. Comput. (2016)
- et al., Evolving cognitive and social experience in particle swarm optimization through differential evolution, Inform. Sci. (2012)
- et al., Evolving RBF neural networks for rainfall prediction using hybrid particle swarm optimization and genetic algorithm, Neurocomputing (2015)
- et al., Hybrid artificial bee colony algorithm and particle swarm search for global optimization, Math. Probl. Eng. (2014)
- et al., Particle swarm optimisation for feature selection in classification: novel initialisation and updating mechanisms, Appl. Soft Comput. (2014)
- et al., Particle swarm optimization using dynamic tournament topology, Appl. Soft Comput. (2016)
- J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural...
- A review on nature-based swarm intelligence optimization techniques and its current research directions, Indian J. Sci. Technol.
- Review on mining data from multiple data sources, Pattern Recognit. Lett.
- Particle swarm optimisation for feature selection in classification: a multi-objective approach, IEEE Trans. Cybern.
- Particle swarm optimization with an aging leader and challengers, IEEE Trans. Evol. Comput.
- A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput.
- Principal component particle swarm optimization, IEEE Congress Evol. Comput.
- Multimodal function optimization based on particle swarm optimization, IEEE Trans. Magn.