Particle swarm optimization with adaptive learning strategy
Introduction
Particle swarm optimization (PSO), originally introduced by Kennedy and Eberhart in 1995 [1], [2], is a population-based stochastic optimization technique. Due to its simple implementation and efficiency in exploring global solutions, PSO has been applied successfully to many problems such as classification [3], [4], feature selection [5], task assignment [6], [7], and stochastic optimization [8].
In the canonical PSO algorithm, every particle represents a potential solution to an optimization problem and is defined in terms of two vectors, the velocity $v_i$ and the position $x_i$. Initially, the particles are randomly positioned in a $D$-dimensional search space with random velocity values. During the evolution process, each particle updates its velocity and position in accordance with the following learning strategy:

$$v_i^d(t+1) = \omega v_i^d(t) + c_1 r_1 \left( pbest_i^d(t) - x_i^d(t) \right) + c_2 r_2 \left( gbest^d(t) - x_i^d(t) \right),$$
$$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1),$$

where $\omega$ is the inertia weight, the positive constants $c_1$ and $c_2$ are acceleration coefficients, $r_1$ and $r_2$ are two uniformly distributed random numbers in the range $[0, 1]$, $pbest_i$ is the previous best position of particle $i$, and $gbest$ is the global best position found by all particles thus far. This learning strategy contributes to fast convergence behavior in the PSO algorithm; however, because only the search information of $pbest$ and $gbest$ is used to guide the search direction, the diversity of the population is lost, which increases the possibility of falling into local optima. Therefore, the adoption of efficient learning strategies that can maintain high diversity is a crucial task in PSO.
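The canonical update above can be sketched as follows. This is a minimal illustration, not the paper's method; the swarm size, iteration budget, search bounds, objective function, and coefficient values ($\omega = 0.7$, $c_1 = c_2 = 1.5$) are placeholder choices for demonstration only:

```python
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical (global-best) PSO with the standard velocity/position update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = rng.uniform(-1.0, 1.0, (n_particles, dim))   # velocities
    pbest = x.copy()                                 # personal best positions
    pbest_f = np.apply_along_axis(objective, 1, x)   # personal best fitness
    g = pbest[np.argmin(pbest_f)].copy()             # global best position

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_x, best_f = pso(lambda z: float(np.sum(z ** 2)))  # sphere function
```

Note that every particle is pulled toward the single shared $gbest$, which is exactly the diversity-loss mechanism the passage describes: once $gbest$ settles in a basin, the whole swarm tends to collapse into it.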
Many PSO algorithms with different learning strategies have been developed to increase the population diversity. Mendes et al. [9] proposed a fully informed PSO (FIPS) algorithm, in which the velocity adjustment for a particle is learned from the complete information of its entire neighborhood. Liang et al. [10] introduced a comprehensive learning strategy for PSO (CLPSO) to preserve the population diversity. In CLPSO, the best search information from all other particles is used to guide the velocity update. Chen et al. [11] developed a PSO algorithm with an aging leader and challengers (ALC-PSO). In ALC-PSO, an aging mechanism is implemented in which a leader particle with a certain lifespan leads the updating process for the swarm. Although the aforementioned learning strategies show good performance, they are designed only for a single swarm in which the diversity is not well maintained.
Recently, the multiswarm technique [12] has attracted considerable attention. In multiswarm optimization, the entire swarm is divided into a set of subswarms, and each subswarm focuses on one specific region of the search space. Lynn et al. [13] proposed a heterogeneous comprehensive learning PSO (HCLPSO) algorithm, in which the swarm is divided into two subswarms and the particles are updated via a comprehensive learning strategy. Xu et al. [14] developed a PSO algorithm based on a dimensional learning strategy (TSLPSO). Unlike in HCLPSO, one of the two subswarms in TSLPSO is specially employed to enhance the population diversity and convergence speed by means of a dimensional learning strategy. Although these algorithms improve the performance of PSO, each subswarm works independently, without interaction. As a further development, Liang et al. [15] introduced a dynamic multiswarm PSO (DMS-PSO) algorithm, which uses a regrouping schedule to exchange information among the subswarms. Based on DMS-PSO and CLPSO, Nasir et al. [16] developed a dynamic neighborhood learning PSO (DNLPSO) algorithm, in which each particle learns not only from itself but also from other particles in its subswarm. However, these learning strategies are based on a fixed subswarm size, which affects the robustness and computation cost of PSO. To determine a more appropriate population size, Chen et al. [17] proposed a scheme in which the subswarm population size can be adjusted dynamically in accordance with the population diversity. However, this algorithm predefines upper and lower bounds on the population size and thus is not fully adaptive in essence. Although this algorithm offers improved population diversity compared with the aforementioned PSO variants, the nonadaptive determination of the population size leads to imprecise subswarm division, which affects the ability of the learning strategy to maintain diversity.
Based on the above analysis, we propose a novel PSO algorithm with an adaptive learning strategy (PSO-ALS). In contrast to existing learning strategies, the proposed strategy is based on adaptive determination of the subswarm size, which is an intelligent and efficient way to maintain the population diversity so as to improve the global search ability. In PSO-ALS, first, by means of a fast searching clustering method, the swarm is adaptively grouped into several subswarms, in which the particles are classified into ordinary particles and the locally best particle. Second, for the ordinary particles in each subswarm, the locally best particle in that subswarm, rather than the global best particle, is considered in the learning strategy to enhance the population diversity. Third, the locally best particle in each subswarm learns from the average information of the locally best particles in all subswarms to further promote the population diversity via information exchange. Finally, two proposed learning strategies without an explicit velocity are devised to accelerate the convergence speed of PSO-ALS. The main contributions of this paper are listed as follows:
- We propose a novel PSO algorithm based on an adaptive learning strategy that can afford great diversity enhancement, thus helping the optimizer to avoid local optima.
- The proposed learning strategy is based on adaptive subswarm division. Compared with a fixed or parameter-assisted subswarm size, an adaptively determined subswarm size serves as a more accurate foundation for the implementation of a learning strategy. Moreover, two different learning strategies are specifically devised for different types of particles to enhance the population diversity.
- The learning strategies are further simplified to an expression without an explicit velocity term to accelerate the convergence speed.
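The structure described above can be sketched as a single iteration over already-formed subswarms. This is a purely illustrative, hypothetical sketch, not the paper's actual update rules (those are given in Section 3): it assumes a simple velocity-free move toward the guide position, with ordinary particles guided by their subswarm's locally best particle and each locally best particle guided by the mean of all locally best particles. The function name `als_step` and all coefficients are invented for this example:

```python
import numpy as np

rng = np.random.default_rng(1)

def als_step(subswarms, objective):
    """One hypothetical PSO-ALS-style iteration (illustrative only).

    `subswarms` is a list of (n_k, dim) position arrays, one per subswarm.
    """
    # Locate the locally best particle of each subswarm
    fits = [np.apply_along_axis(objective, 1, s) for s in subswarms]
    lbest_idx = [int(np.argmin(f)) for f in fits]
    lbest = np.array([s[i] for s, i in zip(subswarms, lbest_idx)])
    lbest_mean = lbest.mean(axis=0)  # information shared across subswarms

    for k, swarm in enumerate(subswarms):
        for j in range(len(swarm)):
            r = rng.random(swarm.shape[1])
            if j == lbest_idx[k]:
                # locally best particle learns from the mean of all lbests
                swarm[j] = swarm[j] + r * (lbest_mean - swarm[j])
            else:
                # ordinary particle learns from its own subswarm's lbest,
                # not from a single global best, preserving diversity
                swarm[j] = swarm[j] + r * (lbest[k] - swarm[j])
    return subswarms
```

The design point this sketch emphasizes is the one the contributions list makes: no single $gbest$ drives every particle, and no explicit velocity term is carried between iterations.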
The rest of this paper is organized as follows. Section 2 reviews the related studies on techniques for diversity preservation. Section 3 introduces PSO-ALS and presents the procedures of the algorithm in detail. In Section 4, the performance of PSO-ALS is validated based on a variety of experiments. Finally, conclusions and future work are discussed in Section 5.
Section snippets
Related works
Maintaining population diversity is a crucial aspect of solving multimodal optimization problems. Not only in PSO research but also in the context of many other evolutionary algorithms [18], [19], many approaches have been proposed for improving population diversity, such as niching techniques and multiswarm techniques. In this section, we introduce the related works concerning these techniques.
The original niching technique was introduced by Cavicchio in early 1970 [20]. In view of the wide
The proposed PSO algorithm with adaptive learning strategy
In this section, the proposed PSO algorithm with an adaptive learning strategy is described in detail. Fig. 1 illustrates the main process of the PSO-ALS algorithm. During the search process, the swarm is adaptively divided into several subswarms in accordance with the distribution of the particles. In each subswarm, we employ two different learning strategies to guide the search directions of two different types of particles. The search process stops when the global optimum is found or when
Experimental verification and comparisons
A variety of experiments are presented in this section to evaluate the performance of the proposed PSO-ALS algorithm. We first describe the benchmarks and experimental settings. Second, we compare the proposed PSO-ALS algorithm with other PSO variants in terms of solution accuracy and convergence speed. Third, we carry out a statistical significance test of the experimental results. Fourth, the computation times of different PSO algorithms are discussed. Fifth, we analyze the performance of
Conclusion and future work
A PSO algorithm with an adaptive learning strategy is proposed in this paper. During the evolution process, the whole swarm is adaptively clustered into several subswarms. In each subswarm, particles of different types are updated in accordance with different learning strategies. After all the subswarms have been updated, the global best value is obtained by comparing the fitness values of all of the locally best particles.
In PSO-ALS, the subswarm size is determined adaptively in accordance
CRediT authorship contribution statement
Yunfeng Zhang: Conceptualization, Methodology, Software, Investigation, Writing - original draft, Writing - review & editing. Xinxin Liu: Conceptualization, Methodology, Software, Investigation, Writing - original draft, Writing - review & editing. Fangxun Bao: Conceptualization, Methodology, Software, Investigation, Writing - original draft, Writing - review & editing. Jing Chi: Resources, Visualization, Formal analysis. Caiming Zhang: Project administration, Writing - review & editing,
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
The authors are thankful for the anonymous referee’s constructive comments. This work was supported by the National Natural Science Foundation of China (Grant Nos. 61972227, 61672018, 61772309, 61873117, and U1609218), the Natural Science Foundation of Shandong Province (Grant Nos. ZR2019MF051 and ZR201808160102), the Primary Research and Development Plan of Shandong Province (Grant Nos. GG201710090122, 2017GGX10109 and 2018GGX101013), the Fostering Project of Dominant Discipline and Talent Team
References (46)
- A hybrid particle swarm optimization approach for clustering and classification of datasets, IEEE Trans. Power Syst. (2011)
- The synergistic combination of particle swarm optimization and fuzzy sets to design granular classifier, IEEE Trans. Evol. Comput. (2015)
- Entropic simplified swarm optimization for the task assignment problem, Appl. Soft Comput. (2017)
- Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation, Swarm Evol. Comput. (2015)
- Particle swarm optimization based on dimensional learning strategy, Swarm Evol. Comput. (2019)
- A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization, Inform. Sci. (2012)
- Particle swarm optimization with adaptive population size and its application, Appl. Soft Comput. (2009)
- On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. (2008)
- Ensemble particle swarm optimizer, Appl. Soft Comput. (2017)
- Particle swarm optimizer with crossover operation, Eng. Appl. Artif. Intell. (2018)