Multiple scale self-adaptive cooperation mutation strategy-based particle swarm optimization

https://doi.org/10.1016/j.asoc.2020.106124

Highlights

  • The method applies multi-scale Gaussian mutations as the basic mutation strategy.

  • The method ranks particles into several subgroups to search different regions.

  • The method uses decreasing standard deviations for global and local search balance.

  • The method self-adaptively adjusts the mutation threshold for each dimension.

Abstract

The Particle Swarm Optimization (PSO) algorithm has lately received great attention due to its powerful search capacity and simplicity of implementation. However, previous studies have demonstrated that PSO still suffers from two key drawbacks, premature convergence and slow convergence, especially when dealing with multi-modal optimization problems. To address these two issues, we propose a multiple scale self-adaptive cooperative mutation strategy-based particle swarm optimization algorithm (MSCPSO) in this paper. The proposed approach adopts multi-scale Gaussian mutations with different standard deviations to enhance the capacity to search the whole solution space sufficiently. In this multi-scale mutation strategy, large-scale mutation lets the population explore the global solution space and rapidly locate promising solution areas at the early stage, which avoids premature convergence while simultaneously speeding up convergence, whereas small-scale mutation allows the population to exploit the local best solution area more accurately during the later stage, thus improving the accuracy of the final solution. To guarantee convergence speed while avoiding premature convergence, the standard deviations of the multi-scale Gaussian mutations are reduced as iterations proceed, so that the population concentrates on accurate local exploitation during the later evolutionary stage, which in turn speeds up convergence. In addition, the threshold at which each dimension executes mutation is dynamically adjusted according to that dimension's previous mutation frequency, allowing MSCPSO to better balance global and local search capacities and thus avoid premature convergence without reducing convergence speed. Extensive experimental results on various benchmark optimization problems demonstrate that the proposed approach is superior to other existing PSO techniques and exhibits good robustness.

Introduction

Particle swarm optimization (PSO) was originally proposed by Eberhart and Kennedy in 1995 [1]. As an optimization algorithm based on swarm intelligence [2], PSO searches for the optimal solution by imitating the social behavior of organisms such as birds. Like a bird, a candidate solution in PSO, usually called a particle, is characterized by two key components, its flying velocity and its current position, which are dynamically adjusted during the evolutionary process according to the particle's own flight history and the flying experience of the whole swarm. Owing to its simple structure and implementation, PSO has not only shown good performance in various mathematical optimization fields, including target function optimization [3], [4] and neural network training [5], [6], but has also been widely applied to many real-world optimization problems such as manufacturing control in engineering optimization [7], [8], multi-source scheduling in cloud computing [9], [10], and other industrial problems [11]. Unfortunately, like other classical swarm intelligence optimization algorithms, PSO still suffers from slow convergence and premature convergence, especially when dealing with large-scale complex optimization problems.

To speed up the convergence of PSO, researchers first paid attention to parameter improvement, especially the inertia weight. The inertia weight was first introduced into the velocity update equation by Shi and Eberhart [12], who afterward proposed a fuzzy adaptive method for adjusting the inertia weight in a modified PSO [13], an approach shown to greatly improve the performance of PSO. Building on their work, many PSO variants have been proposed over the past few decades. For instance, Li et al. [14] proposed a hybrid particle swarm optimization (HPSOFW), which establishes a novel search behavior model by incorporating fuzzy reasoning and a weighted particle, thereby improving the search capability of the conventional PSO algorithm. Similarly, Nesamalar et al. [15] used a fuzzy inference system to dynamically update the inertia weight. In addition, Wang et al. [16] proposed a self-adaptive learning based PSO (SLPSO) that adaptively selects the most suitable inertia weight update strategy according to the stage of the evolutionary process. In contrast to these inertia-weight-based methods, Chen et al. [17] focused on the acceleration coefficients and proposed a hybrid particle swarm optimizer with sine cosine acceleration coefficients (H-PSO-SCAC), which adopts a sine map to adjust the acceleration coefficients. Although progress has been made in improving the convergence speed of PSO, high convergence speed, especially in the early stage, often makes the algorithm prone to becoming trapped in local optima and thus results in premature convergence.
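
To make the role of the inertia weight concrete, the following minimal Python sketch shows the inertia-weighted velocity update of [12] together with a simple linearly decreasing weight schedule. The linear schedule and all parameter values here are illustrative assumptions for exposition, not the fuzzy adaptive adjustment of [13].

```python
import numpy as np

def velocity_update(v, x, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    # Inertia-weighted velocity update (Shi and Eberhart [12]):
    # v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    # A common simple schedule (values assumed): w decreases linearly
    # over the run, trading early exploration for late exploitation.
    return w_start - (w_start - w_end) * t / t_max
```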

To avoid premature convergence, researchers then shifted their attention from parameter improvement to modifying the population topology of PSO. Inspired by Michalewicz's study [18] of the population structure of evolutionary algorithms, Cesare et al. [19] used a stochastic Markov chain model to define an intelligent topological structure for the swarm's population, in which the better particles exert an important influence on the others. Wang et al. [20] proposed a dynamic tournament topology strategy to improve PSO, in which each particle is guided by several better solutions randomly selected from the entire population. Although the selection of the guiding particles is stochastic, the reported experimental results show that it still favors particles with better solutions. To preserve population diversity and thus overcome premature convergence, new population topologies focusing more on local best particles were introduced in [21], [22] and have been shown to work well in some cases. For these PSO variants, although swarm diversity can be maintained and premature convergence avoided as reported, the intrinsic PSO evolutionary topology may be destroyed by the introduction of complicated structures, which reduces convergence speed. To address this dilemma, many researchers have modified the velocity update formula of PSO. For example, Liang et al. [23] used all other particles' historical best information to update a candidate particle's velocity via a learning strategy, which preserves swarm diversity and hence hinders premature convergence without reducing convergence speed. Based on the same idea, Mendes et al. [24] proposed the fully informed PSO (FIPSO), which also uses information from all of a particle's neighbors, instead of only the best one, to guide its evolution. Unlike these methods, Cui et al. [25] utilized a predicted globally optimal solution rather than all neighbors' information to adjust the velocity of a candidate particle. Besides modifications to the velocity update formula, new learning strategies have been introduced to balance avoiding premature convergence against speeding up convergence. For instance, Wang et al. [26] presented an enhanced PSO algorithm called GOPSO that combines generalized opposition-based learning with Cauchy mutation. Huang et al. [27] used an example set of multiple global best particles to update particle positions and proposed an example-based learning PSO (ELPSO) to balance swarm diversity and convergence speed. Similarly, Cao et al. [28] introduced a "worst replacement" strategy, in which the position of the worst particle in the swarm is replaced by a better newly generated position; the reported experimental results show that this strategy benefits the convergence of PSO.

In recent years, research has demonstrated that integrating evolutionary mechanisms from other algorithms, such as the simulated annealing algorithm (SA) and the genetic algorithm (GA), can also effectively improve the performance of PSO. As a case of integration with SA, Dong et al. [29] proposed a hybrid particle swarm optimization algorithm to search for a set of Pareto-optimal solutions, employing simulated annealing as a local search strategy to exploit the local solution space. Similarly, Wang et al. [30] adopted both SA and an artificial neural network to enhance the global search ability of PSO, developing a modified PSO that was used to solve source estimation problems. To exploit the advantages of the genetic algorithm, Tam et al. [31] integrated GA into PSO and presented a hybrid PSO method that uses GA with two-point standard mutation and one-point refined mutation to further refine the exploitative search of PSO. Instead of directly combining two algorithms, some researchers have introduced the mutation operation of evolutionary algorithms into PSO to strengthen its global search ability and thus avoid premature convergence. For instance, Wei et al. [32] used polynomial mutation to maintain diversity in the external archive, which effectively enhances the search capability of the algorithm and prevents particles from falling into local optima and premature convergence, although the convergence rate still needs improvement. Chen et al. [33] proposed a modified PSO algorithm with two differential mutations; the two differential mutation operations are assigned different control parameters, which allows the particles in the top layer to search the global solution space sufficiently. Similarly, Cheng et al. [34] introduced a multi-dimensional uniform mutation operator to prevent the algorithm from being trapped in local optima. Tong et al. [35] proposed a PSO-optimized scale-transformation stochastic-resonance algorithm with a stability mutation operator; although this method improves iteration speed and stability, the accuracy of its final solution needs further improvement. Recently, Wang et al. [36] presented a hybrid PSO algorithm employing an adaptive learning mutation strategy (ALPSO). Although ALPSO can obtain good solutions in some cases as reported, the condition set for its competitive learning mutation strategy seems hard to satisfy when the swarm frequently becomes trapped in local optima, which wastes evolutions and fitness evaluations and thus reduces convergence speed. Gholami et al. [37] presented an improved PSO algorithm with a modified personal best particle update strategy (MPBPSO) to schedule renewable generation in a micro-grid under load uncertainty, and the experimental results demonstrated its effectiveness. In addition, Wang et al. [38] proposed a self-adaptive mutation differential evolution algorithm based on particle swarm optimization (DEPSO), which combines an improved DE mutation strategy with stronger global exploration ability and a PSO mutation strategy with higher convergence ability to preserve global search capacity.
In fact, although the PSO algorithms described above can outperform traditional PSO in some cases thanks to the introduced mutation, a single uniform mutation mechanism often fails to find a better solution region that would let the population escape from a local minimum, especially on complex optimization problems; this wastes evolution iterations and may even fail to avoid premature convergence within the pre-specified finite number of iterations. In addition, an inappropriate triggering condition for mutation, which is meant to balance the inherent PSO evolution and the additional mutation operation, can also be a key reason why the performance of PSO does not improve significantly.

To address the above limitations, we propose a multiple scale self-adaptive cooperative mutation strategy-based particle swarm optimization algorithm (MSCPSO) in this paper. In the proposed algorithm, we adopt multi-scale Gaussian mutations with different standard deviations to guarantee the capacity to explore better solution space and thus avoid premature convergence. Specifically, the large-scale mutation is responsible for exploring a wide solution region around a candidate particle, while the small-scale mutation searches the local neighborhood around it. This mutation mechanism with multiple dispersed scales makes a candidate particle more likely to find a better solution and consequently helps it escape from local optima. To ensure convergence speed, the standard deviations of the multi-scale Gaussian mutations are dynamically set proportional to the fitness value of the whole population. Since the fitness value of the population becomes smaller as iterations increase when solving minimization problems, these standard deviations are reduced accordingly. The gradually decreasing standard deviations make the population concentrate on accurate local exploitation during the later evolutionary stage, which speeds up convergence and improves the accuracy of the final solution. Unlike other PSO algorithms with mutation, both the mutation threshold condition and the mutation operation are defined per dimension of a solution rather than for the whole population or the global best solution. In addition, the mutation threshold condition for each dimension is self-adaptively adjusted according to that dimension's previous mutation frequency. This strategy better balances the inherent PSO evolution and the additional mutation operation, so that global search ability is maintained without losing local search capacity. Note that the multi-scale mutation mechanism is, to a certain extent, similar to the idea of multiple subpopulations with different mutation strengths used in nested evolution strategies [2], since both attempt to balance exploration and exploitation. The difference is that, as mentioned above, the standard deviations of the multi-scale Gaussian mutations change dynamically with the fitness value of the whole population, which not only guarantees the capacity to explore better solution space in the early stage but also ensures a convergent tendency in the later stage. Furthermore, the multi-scale mutation operations are performed dimension-wise on the same particle once it satisfies the premature condition, helping it escape local optima with high probability, rather than applying different scales to different particles; this is beneficial for keeping a good tradeoff between the PSO evolution strategy and the multi-scale mutation strategy. Extensive experimental results on benchmark optimization problems show that the proposed algorithm outperforms other state-of-the-art modified PSO algorithms with statistical significance.
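
As a rough illustration of the per-dimension self-adaptive threshold described above, the sketch below raises the threshold of dimensions that have mutated frequently and lowers it for rarely mutated ones. This is only one plausible reading of the mechanism; the exact update rule, the `gain` parameter, and all function names are assumptions, since Section 3 defines the actual formula.

```python
import numpy as np

def update_thresholds(thresholds, mutation_counts, iteration, gain=0.1):
    # Hypothetical per-dimension adaptation: dimensions whose mutation
    # frequency is above the swarm average get a stricter (larger)
    # threshold, below-average ones get a looser (smaller) threshold.
    freq = mutation_counts / max(iteration, 1)   # per-dimension mutation frequency
    return thresholds * (1.0 + gain * (freq - freq.mean()))
```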

The remainder of this paper is organized as follows. Section 2 briefly introduces the standard PSO. Section 3 presents the details of the proposed multiple scale self-adaptive cooperative mutation-based PSO algorithm, together with an analysis of its mechanism and computational complexity. Section 4 reports the experimental results and discussions on extensive benchmark functions. Finally, conclusions are drawn in Section 5.

Section snippets

Standard PSO algorithm

PSO is a swarm intelligence algorithm proposed by Eberhart and Kennedy [1] in 1995. Its main idea originates from imitating swarm behaviors in ecosystems, such as insects foraging and birds flocking, which usually search for food or fly in a cooperative way. Specifically, each member of the swarm dynamically adjusts its search or flight trajectory by learning from its own experience and that of the other members. Like an insect or a bird, a member of the PSO swarm is also characterized by two key components, a flying velocity and a current position, which are updated iteratively during the search.
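
For reference, a minimal global-best PSO loop for minimization could look as follows. The swarm size, inertia weight, and acceleration coefficients are illustrative defaults, not the settings used in this paper.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0), seed=0):
    # Minimal standard (global-best) PSO; parameters are assumed defaults.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # current positions
    v = np.zeros((n_particles, dim))                 # current velocities
    pbest = x.copy()                                 # personal best positions
    pbest_f = np.apply_along_axis(f, 1, x)           # personal best fitness
    gbest = pbest[pbest_f.argmin()].copy()           # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                   # position update
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f                        # refresh personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[pbest_f.argmin()].copy()       # refresh global best
    return gbest, pbest_f.min()
```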

Multiple scale cooperative mutation strategy-based PSO algorithm

In the aforementioned modified algorithms with mutation, the mutation operation applied to premature particles aims to help them escape from the local optimum in which they are currently trapped and thereby avoid premature convergence. Whether a particle successfully escapes from the local optimum therefore relies completely on the result of the mutation operation. Suppose that mutation results are evaluated by their fitness values; a good mutation result then means that the mutated particle has a better fitness than before the mutation.
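
Since this section is excerpted, the sketch below only illustrates the mutation idea as described in the abstract and introduction: Gaussian mutations at several scales are tried per dimension, with standard deviations proportional to the swarm's current mean fitness, and a mutant is kept only if it improves fitness. The scale list, the acceptance rule, and all names are assumptions, not the paper's exact operator.

```python
import numpy as np

def multiscale_mutation(particle, f, pop_mean_fitness,
                        scales=(1.0, 0.1, 0.01), rng=None):
    # Try Gaussian mutations at several scales on each dimension and
    # keep improvements; sigma shrinks with the swarm's mean fitness,
    # so mutations become finer as the population converges (assumed).
    rng = rng or np.random.default_rng()
    best, best_f = particle.copy(), f(particle)
    for s in scales:
        sigma = s * abs(pop_mean_fitness)            # scale-dependent std dev
        for d in range(best.size):
            trial = best.copy()
            trial[d] += rng.normal(0.0, sigma)       # mutate one dimension
            trial_f = f(trial)
            if trial_f < best_f:                     # accept only improvements
                best, best_f = trial, trial_f
    return best, best_f
```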

Benchmark functions and experimental configurations

To demonstrate the effectiveness of the proposed PSO algorithm in avoiding premature convergence and speeding up convergence, we first carried out two groups of optimization experiments on 2 basic benchmark unimodal functions and 3 basic benchmark multimodal functions, respectively, which are widely used as optimization test functions in the literature. The detailed information on these functions is summarized in Table 1, and all of them have only a single global optimum.
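
Table 1 itself is not reproduced in this excerpt. As stand-ins for the two function classes it lists, the Sphere (unimodal) and Rastrigin (multimodal) functions below are typical members of those classes; whether they actually appear in Table 1 is an assumption.

```python
import numpy as np

def sphere(x):
    # Unimodal benchmark: single global minimum f(0) = 0.
    return float(np.sum(x ** 2))

def rastrigin(x):
    # Multimodal benchmark: many local minima, global minimum f(0) = 0.
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```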

Conclusion

In this paper, we proposed a multiple scale self-adaptive cooperative mutation strategy-based particle swarm optimization. In the proposed algorithm, a multi-scale mutation strategy is first introduced to guarantee that the populations sufficiently search the whole solution space during evolution. On one hand, large-scale mutation helps the populations explore the global solution space and rapidly locate better solution areas at the early stage. On the other hand, small-scale mutation allows the populations to exploit the local best solution area more accurately during the later stage, thus improving the accuracy of the final solution.

CRediT authorship contribution statement

Xinmin Tao: Conceptualization, Methodology, Software, Writing - original draft. Wenjie Guo: Software, Investigation, Formal analysis. Qing Li: Data curation, Investigation. Chao Ren: Writing - review & editing. Rui Liu: Visualization.

Declaration of Competing Interest

No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.asoc.2020.106124.

Acknowledgments

This work was supported in part by the Fundamental Research Funds for the Central Universities, China (nos. 2572017EB02, 2572017CB07), the Innovative Talent Fund of the Harbin Science and Technology Bureau, China (no. 2017RAXXJ018), and the Double First-Class Scientific Research Foundation of Northeast Forestry University, China (no. 411112438). We thank Qing He and Junrong Zhou for their assistance with the experiment designs in the revised manuscript. The authors would also like to thank the anonymous reviewers for their valuable comments and suggestions.

References (40)

Cited by (28)

  • Unified whale optimization algorithm based multi-kernel SVR ensemble learning for wind speed forecasting

    2022, Applied Soft Computing
    Citation excerpt:

    At present, for the above parameter selection problem, many scholars have proposed relevant schemes. By involving heuristic algorithms, such as artificial bee colony (ABC) [20], the fruit fly optimization algorithm (FOA) [21], simulated annealing [22], cuckoo search optimization (CSO) [16], etc., various studies have paid special attention to parameter optimization in SVR modeling [23–26]. Balogun et al. [27] employ gray wolf optimization (GWO), CSO, and the bat algorithm (BA) to adjust the parameters of the SVR model, and the results show that the CSO-based SVR had the best prediction performance, followed by the GWO-based SVR.

  • Multi-space collaboration framework based optimal model selection for power load forecasting

    2022, Applied Energy
    Citation excerpt:

    By combining the chaos idea with a random optimization method, Liu et al. [38] propose a chaotic swarm intelligence optimization, where a particle is adjusted using a chaos method when it is in a stagnant state. Tao et al. [39] propose a multi-scale search algorithm to make particles more prone to mutation, which can significantly improve the probability of selecting the optimal model and avoid unnecessary evaluation of local extrema. In addition, a hybrid metaheuristic algorithm-based adaptive robust optimization, which combines the SCA and the CSA, is proposed to solve for the optimal parameters in uncertainty modeling [40].

  • Fitness peak clustering based dynamic multi-swarm particle swarm optimization with enhanced learning strategy

    2022, Expert Systems with Applications
    Citation excerpt:

    Similarly, Wang, Zhang, and Li (2018) also reported an adaptive mutation strategy (ALPSO) to improve population diversity and thus help PSO escape local optima. In our previous study (Tao & Guo, 2020), we presented a novel PSO variant that combines a multiple scale mutation strategy with a self-adaptive threshold to realize the trade-off between exploitation and exploration. Learning strategy: apart from parameter tuning, topology modification, and hybrid strategies, the adoption of different learning strategies to balance exploration and exploitation has attracted considerable attention from researchers.

  • Adaptive multi-objective particle swarm optimization with multi-strategy based on energy conversion and explosive mutation

    2021, Applied Soft Computing
    Citation excerpt:

    Therefore, the population is divided into three classes, and customized optimization strategies are formulated for the particles in different classes. Due to the inherent defects of canonical MOPSO, a mutation operator is usually adopted to escape from local optima and expand the exploration scope [57]. If MOPSO is to achieve promising optimization ability on complex MOPs, the mutation component should play an important role in strengthening the optimization ability beyond solving the local optimum problem.

  • Self-Adaptive two roles hybrid learning strategies-based particle swarm optimization

    2021, Information Sciences
    Citation excerpt:

    Since our work mainly focuses on the parameter tuning strategy and the learning-modification strategy, we only provide literature reviews on PSOs based on these two strategies; for more details on the other two, please refer to [42]. Numerous studies reveal that an appropriate parameter setting strategy helps enhance the performance of PSO.
