
An analysis of the velocity updating rule of the particle swarm optimization algorithm

Journal of Heuristics

Abstract

The particle swarm optimization algorithm associates three vectors with each particle: the inertia, personal influence, and social influence vectors. The personal and social influence vectors are typically multiplied by random diagonal matrices (often referred to as random vectors), which changes both their lengths and their directions. This multiplication, in turn, influences the variation of the particles in the swarm. In this paper we examine several issues associated with multiplying the personal and social influence vectors by such random matrices: (1) uncontrollable changes in the length and direction of these vectors, which in some situations delay convergence or attract particles to locations far from quality solutions; (2) weak direction alternation for vectors that are closely aligned with coordinate axes, which in some situations prevents the swarm from further improvement; and (3) restriction of particle movement to a single orthant, which in some situations causes premature convergence. To overcome these issues, we use randomly generated rotation matrices (rather than random diagonal matrices) in the velocity updating rule of the particle swarm optimizer. This approach makes it possible to control the impact of the random components (i.e., the random matrices) on the direction and length of the personal and social influence vectors separately, so that all of the above issues are effectively addressed. We propose to use Euclidean rotation matrices because they preserve the length of a vector during rotation, which makes it easier to control the effects of the randomness on the direction and length of vectors. The rotation angles of the Euclidean matrices are generated randomly from a normal distribution, whose mean and variance are investigated in detail for different algorithms and different numbers of dimensions. We also propose an adaptive approach for setting the variance of the normal distribution that is independent of the algorithm and the number of dimensions. The method is incorporated into several particle swarm optimization variants and tested on 18 standard optimization benchmark functions in 10-, 30-, and 60-dimensional spaces. Experimental results show that the proposed method can significantly improve the performance of several types of particle swarm optimization algorithms in terms of both convergence speed and solution quality.
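To make the contrast concrete, the following is a minimal illustrative sketch (ours, not the authors' code; the inertia weight `w`, acceleration coefficients `c1` and `c2`, and angle spread `sigma` are placeholder values) of one velocity update of a single particle under the conventional random-diagonal scheme and under a rotation-based alternative built from random Givens rotations in the spirit of the paper's approach:

```python
import numpy as np

def random_diagonal_update(v, x, p, g, w=0.7298, c1=1.496, c2=1.496):
    """Classical update: PI = p - x and SI = g - x are scaled element-wise by
    random vectors, which changes both their lengths and their directions."""
    r1, r2 = np.random.rand(len(x)), np.random.rand(len(x))
    return w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)

def random_rotation(d, sigma):
    """Compose Givens rotations in all d(d-1)/2 coordinate planes, with angles
    drawn from N(0, sigma^2). The product is orthogonal, so it preserves
    vector lengths and perturbs only directions."""
    R = np.eye(d)
    for i in range(d - 1):
        for j in range(i + 1, d):
            a = np.random.normal(0.0, sigma)
            G = np.eye(d)
            G[i, i] = G[j, j] = np.cos(a)
            G[i, j], G[j, i] = -np.sin(a), np.sin(a)
            R = R @ G
    return R

def rotation_based_update(v, x, p, g, sigma=0.35, w=0.7298, c1=1.496, c2=1.496):
    """Rotation variant: the directions of PI and SI are perturbed by random
    rotations while their lengths remain under explicit control."""
    d = len(x)
    return (w * v
            + c1 * random_rotation(d, sigma) @ (p - x)
            + c2 * random_rotation(d, sigma) @ (g - x))
```

This sketch is deliberately straightforward rather than efficient: it materializes full \(d\times d\) matrices, whereas Appendix III shows how the same product of plane rotations can be applied directly to a vector in \(O(d^{2})\).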



Notes

  1. In general, the personal best can be a set of best positions, but all PSO types listed in this paper use a single personal best.

  2. These two coefficients control the effect of \(\vec {p}_{t}^{i}\) and \(\vec {g}_{t}\) on the movement of particles, and they play an important role in the convergence of the algorithm. They are usually set by a practitioner (de Oca et al. 2009) or derived from the dynamics of the particles’ movement (Clerc and Kennedy 2002), as in the constriction formulation shown below.
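In the constriction formulation (Clerc and Kennedy 2002), for example, the coefficient is derived from the particle dynamics as

$$\begin{aligned} \chi =\frac{2}{\left| {2-\varphi -\sqrt{\varphi ^{2}-4\varphi }} \right| },\quad \varphi =c_1 +c_2 >4, \end{aligned}$$

where the common choice \(\varphi =4.1\) yields \(\chi \approx 0.7298\).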

  3. Alternatively, these two random matrices are often considered as two random vectors (Kennedy and Eberhart 1995). In this case, the multiplication of these random vectors by PI and SI is element-wise (a Hadamard product), as illustrated below.
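As a small illustrative example (ours, not from the paper): with PI \(=(1,1)\) and random vector \(r=(0.9,0.1)\),

$$\begin{aligned} r\circ \mathrm{PI}=(0.9,\;0.1),\qquad \left\| {r\circ \mathrm{PI}} \right\| \approx 0.906\;\hbox {versus}\;\left\| \mathrm{PI} \right\| \approx 1.414, \end{aligned}$$

and the direction rotates from \(45^{\circ }\) to roughly \(6.3^{\circ }\): a single random draw changes both the length and the direction of the vector at once.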

  4. Note that this formulation (Eq. 4) is algebraically equivalent to the one in Eq. 3.

  5. This phenomenon (also known as the “drunkard’s walk”) refers to the unlimited growth of the velocity vectors and happens when the control parameters are defined randomly (Clerc and Kennedy 2002).

  6. Note that the model of PSO in that paper was formulated the same as InertiaPSO but the values of parameters were selected according to the formulation in CoPSO.

  7. For the sake of simplicity, the 2-dimensional space is considered for the presented examples. However, the results can be generalized to \(d\)-dimensional spaces.

  8. Note that this property holds in \(d\) dimensions as well, where \(\theta \) and \(\theta ^{\prime }\) are the angles between an arbitrary coordinate axis and the vectors \(\vec {v}\) and \(\vec {v}^{\prime }\), respectively.

  9. Note that, although the changes that result from applying Rndms may help the algorithm to jump out of local optima, the controllable changes are more desirable because the exploration/exploitation balance in the algorithm is adjustable.

  10. A special case of this issue occurs when one of the elements of PI or SI is zero (\({\vec {x}}\) and \({\vec {p}}\), or \({\vec {x}}\) and \({\vec {g}}\), have equal values in one of their dimensions). In this case, the search reduces to a random scaling in that dimension (Clerc 2006; Spears et al. 2010; Van den Bergh and Engelbrecht 2010); see the instance below.
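As a concrete 2-D instance of this (ours): if \(\vec {x}=(3,1)\) and \(\vec {p}=(3,4)\), then

$$\begin{aligned} \mathrm{PI}=\vec {p}-\vec {x}=(0,3),\qquad Rndm\times \mathrm{PI}=(0,\;3r_2 ),\quad r_2 \sim U[0,1], \end{aligned}$$

so for every random draw the personal influence stays confined to the second coordinate axis and is merely rescaled along it.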

  11. Note that this situation may happen with non-zero probability.

  12. Note that these are the parameters that have been considered in this paper and there might be some others that have an effect on the value of \(\sigma \).

  13. Note that the maximum number of FE in this test (Table 5) is 10,000, which differs from the 200,000 used in Table 3; thus, differences between the results in these two tables are expected.

    Table 5 Results of applying CoPSO and CoPSO-Rotm to 18 benchmarks with different numbers of dimensions

Abbreviations

ABSQ:

Average of Best Solution Qualities: the average of solution qualities over 100 runs of an algorithm when it is applied to a benchmark function

AIWPSO:

Adaptive Inertia Weight PSO: one of the variants of PSO

CoPSO:

Constriction coefficient PSO: one of the variants of PSO

Decreasing-IW:

Decreasing Inertia Weight: one of the variants of PSO

FE:

Function evaluation: the number of times the objective function is evaluated during one run of the optimization process

GCPSO:

Guaranteed Convergence PSO: one of the variants of PSO

Increasing-IW:

Increasing Inertia Weight: one of the variants of PSO

PI/SI:

Personal influence/social influence: the personal and social influence vectors (see Eq. 2)

pPSA:

Perturbed particle swarm algorithm: one of the variants of PSO

PSO:

Particle swarm optimization

Rndm :

Diagonal random matrices: \(d\times d\) diagonal matrices with random values uniformly distributed in [0, 1]

Rotm :

Rotation about the origin in all possible planes with predefined angles (\(\alpha _{ij}\)) for each plane (see Eq. 10)

Rot(\(\mu ,\sigma \)) :

Rotation about the origin in all possible planes with random angles for each plane, normally distributed (\(\alpha _{ij} \sim \mathcal{N}(\mu ,\sigma )\)) (see Eq. 10)

Rotm(\(\sigma \)) :

Rotation about the origin in all planes with random angles, each generated according to a normal distribution with \(\mu =0\) and an appropriate variance (see Fig. 7)

Stochastic-IW:

Stochastic Inertia Weight PSO: one of the variants of PSO

StdPSO2006:

Standard PSO proposed in 2006: one of the variants of PSO

WPSO:

Wilke’s PSO: one of the varients of PSO

References

  • Chen, D.B., Zhao, C.X.: Particle swarm optimization with adaptive population size and its application. Appl. Soft Comput. 9(1), 39–48 (2009)

  • Chow, T.L.: Mathematical Methods for Physicists: A Concise Introduction. Cambridge University Press, Cambridge (2000)

  • Clerc, M.: Particle Swarm Optimization. Wiley, New York (2006)

  • Clerc, M., Kennedy, J.: The particle swarm—explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 6(1), 58–73 (2002)

  • de Oca, M.A.M., Stutzle, T., Birattari, M., Dorigo, M.: Frankenstein’s PSO: a composite particle swarm optimization algorithm. IEEE Trans. Evol. Comput. 13(5), 1120–1132 (2009)

  • Duffin, K.L., Barrett, W.A.: Spiders: a new user interface for rotation and visualization of n-dimensional point sets. In: Proceedings of the Conference on Visualization, IEEE Computer Society Press, Los Alamitos, CA (1994)

  • Eberhart, R., Kennedy, J.: A new optimizer using particle swarm theory. In: International Symposium on Micro Machine and Human Science, IEEE (1995)

  • Eberhart, R.C., Shi, Y.: Tracking and optimizing dynamic systems with particle swarms. In: Proceedings IEEE Congress Evolutionary Computation, IEEE (2001)

  • Eiben, A.E., Hinterding, R., Michalewicz, Z.: Parameter control in evolutionary algorithms. IEEE Trans. Evol. Comput. 3(2), 124–141 (1999)

  • Engelbrecht, A.P.: Fundamentals of Computational Swarm Intelligence. Wiley, Hoboken (2005)

  • Ghosh, S., Das, S., Kundu, D., Suresh, K., Panigrahi, B.K., Cui, Z.: An inertia-adaptive particle swarm system with particle mobility factor for improved global optimization. Neural Comput. Appl. 21(2), 237–250 (2010)

  • Hansen, N., Ros, R., Mauny, N., Schoenauer, M., Auger, A.: PSO facing non-separable and ill-conditioned problems. Research report, INRIA (2008)

  • Helwig, S., Wanka, R.: Particle swarm optimization in high-dimensional bounded search spaces. In: Swarm Intelligence Symposium, IEEE (2007)

  • Helwig, S., Branke, J., Mostaghim, S.: Experimental analysis of bound handling techniques in particle swarm optimization. IEEE Trans. Evol. Comput. 17(2), 259–271 (2013)

  • Hsieh, S.T., Sun, T.Y., Liu, C.C., Tsai, S.J.: Efficient population utilization strategy for particle swarm optimizer. IEEE Trans. Syst. Man Cybern. Part B-Cybern. 39(2), 444–456 (2009)

  • Huang, H., Qin, H., Hao, Z., Lim, A.: Example-based learning particle swarm optimization for continuous optimization. Inf. Sci. (2010)

  • Jiang, M., Luo, Y.P., Yang, S.Y.: Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm. Inf. Process. Lett. 102(1), 8–16 (2007)

  • Kennedy, J., Eberhart, R.: Particle swarm optimization. In: International Conference on Neural Networks, IEEE (1995)

  • Krohling, R.A.: Gaussian swarm: a novel particle swarm optimization algorithm. IEEE (2004)

  • Liang, J.J., Qin, A.K., Suganthan, P.N., Baskar, S.: Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 10(3), 281–295 (2006)

  • Mendes, R., Kennedy, J., Neves, J.: The fully informed particle swarm: simpler, maybe better. IEEE Trans. Evol. Comput. 8(3), 204–210 (2004)

  • Nickabadi, A., Ebadzadeh, M.M., Safabakhsh, R.: A novel particle swarm optimization algorithm with adaptive inertia weight. Appl. Soft Comput. 11(4), 3658–3670 (2011)

  • Poli, R.: Analysis of the publications on the applications of particle swarm optimisation. J. Artif. Evol. Appl. 1–10 (2008)

  • Poli, R.: Mean and variance of the sampling distribution of particle swarm optimizers during stagnation. IEEE Trans. Evol. Comput. 13(4), 712–721 (2009)

  • Poli, R., Kennedy, J., Blackwell, T.: Particle swarm optimization: an overview. Swarm Intell. 1(1), 33–57 (2007)

  • Standard PSO 2006 source code. http://particleswarm.info/Standard_PSO_2006.c (2006)

  • Ratnaweera, A., Halgamuge, S.K., Watson, H.C.: Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 8(3), 240–255 (2004)

  • Salomon, R.: Reevaluating genetic algorithm performance under coordinate rotation of benchmark functions—a survey of some theoretical and practical aspects of genetic algorithms. BioSystems 39, 263–278 (1995)

  • Secrest, B.R., Lamont, G.B.: Visualizing particle swarm optimization-Gaussian particle swarm optimization. In: Swarm Intelligence Symposium, IEEE (2003)

  • Shi, Y., Eberhart, R.: A modified particle swarm optimizer. In: World Congress on Computational Intelligence, IEEE (1998a)

  • Shi, Y., Eberhart, R.: Parameter Selection in Particle Swarm Optimization. Evolutionary Programming VII. Springer, Berlin (1998b)

  • Spears, W.M., Green, D.T., Spears, D.F.: Biases in particle swarm optimization. Int. J. Swarm Intell. Res. 1(2), 34–57 (2010)

  • Suganthan, P.N., Hansen, N., Liang, J.J., Deb, K., Chen, Y., Auger, A., Tiwari, S.: Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. In: KanGAL Report (2005)

  • Trelea, I.C.: The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf. Process. Lett. 85(6), 317–325 (2003)

  • Tu, Z., Lu, Y.: A robust stochastic genetic algorithm (StGA) for global numerical optimization. IEEE Trans. Evol. Comput. 8(5), 456–470 (2004)

  • van den Bergh, F., Engelbrecht, A.: A new locally convergent particle swarm optimiser. In: Systems, Man and Cybernetics, Hammamet, Tunisia, IEEE (2002)

  • Van den Bergh, F., Engelbrecht, A.P.: A study of particle swarm optimization particle trajectories. Inf. Sci. 176(8), 937–971 (2006)

  • Van den Bergh, F., Engelbrecht, A.P.: A convergence proof for the particle swarm optimiser. Fundamenta Informaticae 105(4), 341–374 (2010)

  • Wang, Y., Li, B., Weise, T., Wang, J., Yuan, B., Tian, Q.: Self-adaptive learning based particle swarm optimization. Inf. Sci. 181(20), 4515–4538 (2011)

  • Wilcoxon, F.: Individual comparisons by ranking methods. Biometr. Bull. 1(6), 80–83 (1945)

  • Wilke, D.: Analysis of the Particle Swarm Optimization Algorithm. Master’s thesis, University of Pretoria (2005)

  • Wilke, D.N., Kok, S., Groenwold, A.A.: Comparison of linear and classical velocity update rules in particle swarm optimization: notes on diversity. Int. J. Numer. Methods Eng. 70(8), 962–984 (2007a)

  • Wilke, D.N., Kok, S., Groenwold, A.A.: Comparison of linear and classical velocity update rules in particle swarm optimization: notes on scale and frame invariance. Int. J. Numer. Methods Eng. 70(8), 985–1008 (2007b)

  • Xinchao, Z.: A perturbed particle swarm algorithm for numerical optimization. Appl. Soft Comput. 10(1), 119–124 (2010)

  • Yao, X., Liu, Y., Lin, G.: Evolutionary programming made faster. IEEE Trans. Evol. Comput. 3(2), 82–102 (1999)

  • Zhang, L.-P., Yu, H.-J., Hu, S.-X.: Optimal choice of parameters for particle swarm optimization. J. Zhejiang Univ. Sci. 6A(6), 528–534 (2005)

  • Zheng, Y., Ma, L., Zhang, L., Qian, J.: Empirical study of particle swarm optimizer with an increasing inertia weight. In: Congress on Evolutionary Computation, IEEE (2003)

Acknowledgments

The authors would like to extend their great appreciation to the associate editor and anonymous reviewers for constructive comments that have helped us to improve the quality of the paper. Also, the authors extend thanks to Dr. Frank Neumann, Dr. Andrew Sutton, and Dr. Michael Kirley who provided us with excellent comments. This work was partially funded by the ARC Discovery Grants DP0985723, DP1096053, and DP130104395, as well as by Grant N N519 5788038 from the Polish Ministry of Science and Higher Education (MNiSW).

Author information

Corresponding author

Correspondence to Mohammad Reza Bonyadi.

Appendices

Appendix I

Appendix I provides the formulas for all of the benchmark functions used. The variable \(pd\) denotes the number of dimensions.

\(\varvec{f}_{1}\) : Rosenbrock’s function

$$\begin{aligned} f(x)=\sum _{i=1}^{pd-1} {(100(x_{i+1} -x_i ^{2})^{2}+(x_i -1)^{2})} \end{aligned}$$

\(\varvec{f}_{2}\) : Rastrigin’s function

$$\begin{aligned} f(x)=\sum _{i=1}^{pd} {(x_i^2 -10\cos (2\pi x_i )+10)} \end{aligned}$$

\(\varvec{f}_{3}\) : Ackley’s function

$$\begin{aligned} f(x)=20+e-20e^{-0.2\sqrt{\frac{\sum _{i=1}^{pd} {x_i^2 } }{pd}}}-e^{\frac{\sum _{i=1}^{pd} {\cos (2\pi x_i )} }{pd}} \end{aligned}$$

\(\varvec{f}_{4}\) : Weierstrass function

$$\begin{aligned} f(x)=\sum _{i=1}^{pd} \left( {\sum _{k=0}^{k_{\max }} {\left[ {a^{k}\cos \left( {2\pi b^{k}(x_i +0.5)} \right) } \right] } } \right) -pd\sum _{k=0}^{k_{\max }} {\left[ {a^{k}\cos \left( {2\pi b^{k}\cdot 0.5} \right) } \right] }, \end{aligned}$$

where \(a=0.5\), \(b=3\), and \(k_{\max }=20\).

\(\varvec{f}_{5}\) : Griewank function

$$\begin{aligned} f(x)=1+\frac{\sum _{i=1}^{pd} {(x_i -100)^{2}} }{4000}-\prod _{i=1}^{pd} {\cos (\frac{x_i -100}{\sqrt{i}})} \end{aligned}$$

\(\varvec{f}_{6}\) : Sphere function

$$\begin{aligned} f(x)=\sum _{i=1}^{pd} {x_i^2 } \end{aligned}$$

\(\varvec{f}_{7}\) : Non-continuous Rastrigin’s function

$$\begin{aligned} \begin{array}{l} f(x)=\sum _{i=1}^{pd} {(y_i^2 -10\cos (2\pi y_i )+10)} \\ y_i =\left\{ {{\begin{array}{ll} {x_i }&{} {\left| {x_i } \right| <\frac{1}{2}} \\ {\frac{round(2x_i )}{2}}&{} {\left| {x_i } \right| \ge \frac{1}{2}} \\ \end{array} }} \right. ,\quad i=1,2,\ldots ,pd \\ \end{array} \end{aligned}$$

\(\varvec{f}_{8}\) : Quadratic function

$$\begin{aligned} f(x)=\sum _{i=1}^{pd} {\left( {\sum _{j=1}^i {x_{j}} } \right) ^{2}} \end{aligned}$$

\(\varvec{f}_{9}\) : Generalized penalized function

$$\begin{aligned} f(x)&= 0.1\left\{ {\begin{array}{ll} \sin ^{2}(3\pi x_1 )+\sum _{i=1}^{pd-1} {(x_i -1)^{2}[1+\sin ^{2}(3\pi x_{i+1} )]} + \\ (x_{pd} -1)^{2}[1+\sin ^{2}(2\pi x_{pd} )] \\ \end{array}} \right\} \\&+\sum _{i=1}^{pd} {u(x_i ,5,100,4)} \\ \end{aligned}$$

where

$$\begin{aligned} u(x_i ,a,k,m)=\left\{ {{\begin{array}{ll} {k(x_i -a)^{m}}&{}\quad {x_i >a} \\ 0&{}\quad {-a\le x_i \le a} \\ {k(-x_i -a)^{m}}&{}\quad {x_i <-a} \\ \end{array} }} \right. \end{aligned}$$
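For convenience, here is a direct Python transcription (ours; no code is supplied with the paper) of three of the benchmarks above, with \(pd\) implicit in the length of the input vector:

```python
import numpy as np

def rastrigin(x):                       # f2
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):                          # f3
    pd = len(x)
    return (20 + np.e
            - 20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / pd))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / pd))

def griewank(x):                        # f5, shifted by 100 as in the paper
    i = np.arange(1, len(x) + 1)
    return (1 + np.sum((x - 100)**2) / 4000
              - np.prod(np.cos((x - 100) / np.sqrt(i))))
```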

Appendix II

Appendix II displays optimization curves for some of the benchmark functions (\(f_{1}, f_{2}, f_{3}, f_{5}, f_{6}, f_{8}\)) using StdPSO2006, StdPSO2006-Rotm, CoPSO, CoPSO-Rotm, AIWPSO, and AIWPSO-Rotm in 10-, 30-, and 60-dimensional spaces.

Appendix III

Appendix III presents the implementation details for calculating Rotm(\(\sigma \)) as given in Eq. (10). Assume we need to multiply a \(d\)-dimensional vector \(v\) by Rotm(\(\sigma \)). According to Eq. (10), we have:

$$\begin{aligned} v\times Rotm\left( \sigma \right) =v\times Rot_{1,2} \left( {\alpha _{1,2} } \right) \times Rot_{1,3} \left( {\alpha _{1,3} } \right) \times \cdots \times Rot_{d-1,d} \left( {\alpha _{d-1,d} } \right) \end{aligned}$$

The number of matrices on the right side of the equation is \(d(d-1)/2\). However, according to Eq. (10), each matrix \(Rot_{i,j} \left( {\alpha _{i,j} } \right) \) differs from the identity matrix in only four entries, at positions (\(i\), \(i\)), (\(i\), \(j\)), (\(j\), \(i\)), and (\(j\), \(j\)). Consequently, multiplying \(v\) by each \(Rot_{i,j}\) can be performed as follows:

$$\begin{aligned} \left( {v\times Rot_{i,j} \left( {\alpha _{i,j} } \right) } \right) _k =\left\{ {{\begin{array}{ll} {v_i \left[ {Rot_{i,j} } \right] _{i,i} +v_j \left[ {Rot_{i,j} } \right] _{j,i} }&{}\quad {\hbox {if } k=i} \\ {v_j \left[ {Rot_{i,j} } \right] _{j,j} +v_i \left[ {Rot_{i,j} } \right] _{i,j} }&{}\quad {\hbox {if } k=j} \\ {v_k }&{}\quad {\hbox {otherwise}} \\ \end{array} }} \right. \end{aligned}$$

Clearly, multiplying \(v\) by \(Rot_{i,j} \left( {\alpha _{i,j} } \right) \) can be done in O(1), as it alters only two components of \(v\). This formulation is applied to \(v\) repeatedly, once for each matrix \(Rot_{i,j} \left( {\alpha _{i,j} } \right) \). Because the number of these matrices is \(d(d-1)/2\) and each application costs O(1), the overall multiplication is done in \(O(d^{2})\) time.
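The procedure translates directly into code. The following is a minimal sketch (ours; the names are illustrative, and the sign convention of the sine terms should be matched to Eq. (10)) that applies all \(d(d-1)/2\) plane rotations to a vector in place, in \(O(d^{2})\) time overall:

```python
import numpy as np

def apply_rotm(v, sigma):
    """Compute v x Rotm(sigma) without forming any d-by-d matrix.
    Each plane rotation touches only components v[i] and v[j]: O(1) work."""
    v = np.array(v, dtype=float)
    d = len(v)
    for i in range(d - 1):
        for j in range(i + 1, d):
            a = np.random.normal(0.0, sigma)   # alpha_{i,j} ~ N(0, sigma^2)
            c, s = np.cos(a), np.sin(a)
            vi, vj = v[i], v[j]
            v[i] = vi * c + vj * s             # (v x Rot_{i,j})_i
            v[j] = vj * c - vi * s             # (v x Rot_{i,j})_j
    return v
```

Because every plane rotation is norm-preserving, `np.linalg.norm(apply_rotm(v, sigma))` should equal `np.linalg.norm(v)` up to floating-point error, which makes a convenient unit test.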

About this article

Cite this article

Bonyadi, M.R., Michalewicz, Z. & Li, X. An analysis of the velocity updating rule of the particle swarm optimization algorithm. J Heuristics 20, 417–452 (2014). https://doi.org/10.1007/s10732-014-9245-2
