Inertia weight control strategies for particle swarm optimization: too much momentum, not enough analysis

Abstract

Particle swarm optimization (PSO) is a population-based, stochastic optimization technique inspired by the social dynamics of birds. The PSO algorithm is rather sensitive to its control parameters, and thus a significant amount of research effort has been devoted to the dynamic adaptation of these parameters. Adaptive approaches have largely focused on the inertia weight, as it exhibits the clearest relationship with the exploration/exploitation balance of the PSO algorithm. Despite this research effort, however, many inertia weight control strategies have not been thoroughly examined, either analytically or empirically. There is thus a plethora of choices when selecting an inertia weight control strategy, but no study has been comprehensive enough to definitively guide the selection. This paper addresses these issues by first providing an overview of 18 inertia weight control strategies. Secondly, the conditions required for the strategies to exhibit convergent behaviour are derived. Finally, the inertia weight control strategies are empirically examined on a suite of 60 benchmark problems. Results of the empirical investigation show that none of the examined strategies, with the exception of a randomly selected inertia weight, perform even on par with a constant inertia weight.
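
For readers unfamiliar with the role of the inertia weight, the sketch below shows the standard inertia-weight velocity and position update of Shi and Eberhart (1998); the coefficient values are common defaults used purely for illustration and are not the settings examined in this paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w, c1=1.49618, c2=1.49618):
    """One synchronous PSO update for a single particle (illustrative sketch).

    x, v, pbest, and gbest are numpy arrays of equal dimension; w is the
    inertia weight, which scales the previous velocity and thereby controls
    the exploration/exploitation balance discussed in the abstract.
    """
    r1 = np.random.rand(*x.shape)  # per-dimension stochastic components
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```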


References

  • Bansal, J. C., Singh, P. K., Saraswat, M., Verma, A., Jadon, S. S., & Abraham, A. (2011). Inertia weight strategies in particle swarm optimization. In Proceedings of the third world congress on nature and biologically inspired computing (pp. 633–640). IEEE.

  • Beielstein, T., Parsopoulos, K. E., & Vrahatis, M. N. (2002). Tuning PSO parameters through sensitivity analysis. Technical report. Universität Dortmund.

  • Bonyadi, M. R., & Michalewicz, Z. (2016). Particle swarm optimization for single objective continuous space problems: A review. Evolutionary Computation. doi:10.1162/EVCO_r_00180.

  • Carlisle, A., & Dozier, G. (2001). An off-the-shelf PSO. In Proceedings of the workshop on particle swarm optimization (pp. 1–6). Indianapolis.

  • Chatterjee, A., & Siarry, P. (2006). Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Computers & Operations Research, 33(3), 859–871.

  • Chauhan, P., Deep, K., & Pant, M. (2013). Novel inertia weight strategies for particle swarm optimization. Memetic Computing, 5(3), 229–251.

  • Chen, G., Min, Z., Jia, J., & Xinbo, H. (2006). Natural exponential inertia weight strategy in particle swarm optimization. In Proceedings of the 6th world congress on intelligent control and automation (Vol. 1, pp. 3672–3675).

  • Chen, H. H., Li, G. Q., & Liao, H. L. (2009). A self-adaptive improved particle swarm optimization algorithm and its application in available transfer capability calculation. In Proceedings of the fifth international conference on natural computation (Vol. 3, pp. 200–205).

  • Cleghorn, C. W., & Engelbrecht, A. P. (2014a). Particle swarm convergence: An empirical investigation. In Proceedings of the 2014 IEEE congress on evolutionary computation (pp. 2524–2530).

  • Cleghorn, C. W., & Engelbrecht, A. P. (2014b). Particle swarm convergence: Standardized analysis and topological influence. In M. Dorigo, M. Birattari, S. Garnier, H. Hamann, M. de Oca, C. Solnon, & T. Stützle (Eds.), Swarm intelligence (Vol. 8667, pp. 134–145). Lecture Notes in Computer Science. Springer International Publishing.

  • Cleghorn, C. W., & Engelbrecht, A. P. (2015). Particle swarm variants: Standardized convergence analysis. Swarm Intelligence, 9(2–3), 177–203.

  • de Oca, M., Peña, J., Stützle, T., Pinciroli, C., & Dorigo, M. (2009). Heterogeneous particle swarm optimizers. In Proceedings of the 2009 IEEE congress on evolutionary computation (pp. 698–705).

  • Deep, K., Chauhan, P., & Pant, M. (2011). A new fine grained inertia weight particle swarm optimization. In Proceedings of the 2011 world congress on information and communication technologies (pp. 424–429). IEEE.

  • Eberhart, R., & Shi, Y. (2000). Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the 2000 IEEE congress on evolutionary computation (Vol. 1, pp. 84–88). IEEE.

  • Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the sixth international symposium on micro machine and human science (Vol. 1, pp. 39–43). New York, NY.

  • Eberhart, R. C., & Shi, Y. (2001). Tracking and optimizing dynamic systems with particle swarms. In Proceedings of the 2001 IEEE congress on evolutionary computation (Vol. 1, pp. 94–100). IEEE.

  • Engelbrecht, A. P. (2012). Particle swarm optimization: Velocity initialization. In Proceedings of the 2012 IEEE congress on evolutionary computation (pp. 1–8).

  • Engelbrecht, A. P. (2013a). Particle swarm optimization: Global best or local best? In Proceedings of the 2013 BRICS congress on computational intelligence and 11th Brazilian congress on computational intelligence (pp. 124–135). IEEE.

  • Engelbrecht, A. P. (2013b). Roaming behavior of unconstrained particles. In Proceedings of the 2013 BRICS congress on computational intelligence and 11th Brazilian congress on computational intelligence (pp. 104–111).

  • Fan, S. K. S., & Chiu, Y. Y. (2007). A decreasing inertia weight particle swarm optimizer. Engineering Optimization, 39(2), 203–228.

  • Feng, Y., Teng, G. F., Wang, A. X., & Yao, Y. M. (2007). Chaotic inertia weight in particle swarm optimization. In Proceedings of the second international conference on innovative computing, information and control (pp. 475–479). IEEE.

  • Gao, Y. L., An, X. H., & Liu, J. M. (2008). A particle swarm optimization algorithm with logarithm decreasing inertia weight and chaos mutation. In Proceedings of the 2008 international conference on computational intelligence and security (pp. 61–65). IEEE.

  • Garden, R. W., & Engelbrecht, A. P. (2014). Analysis and classification of optimisation benchmark functions and benchmark suites. In Proceedings of the 2014 IEEE congress on evolutionary computation (Vol. 1, pp. 1641–1649).

  • Harrison, K. R., Engelbrecht, A. P., & Ombuki-Berman, B. M. (2016). The sad state of self-adaptive particle swarm optimizers. In Proceedings of the 2016 IEEE congress on evolutionary computation (pp. 431–439). IEEE.

  • Hu, J. Z., Xu, J., Wang, J. Q., & Xu, T. (2009). Research on particle swarm optimization with dynamic inertia weight. In Proceedings of the 2009 international conference on management and service science (Vol. 3, pp. 1–4).

  • Jiao, B., Lian, Z., & Gu, X. (2008). A dynamic inertia weight particle swarm optimization algorithm. Chaos, Solitons & Fractals, 37(3), 698–705.

  • Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the 1995 IEEE international conference on neural networks (Vol. IV, pp. 1942–1948).

  • Kentzoglanakis, K., & Poole, M. (2009). Particle swarm optimization with an oscillating inertia weight. In Proceedings of the 11th annual conference on genetic and evolutionary computation (pp. 1749–1750). ACM.

  • Lei, K., Qiu, Y., & He, Y. (2006). A new adaptive well-chosen inertia weight strategy to automatically harmonize global and local search ability in particle swarm optimization. In Proceedings of the 1st international symposium on systems and control in aerospace and astronautics (pp. 977–980). IEEE.

  • Leonard, B. J., & Engelbrecht, A. P. (2013). On the optimality of particle swarm parameters in dynamic environments. In Proceedings of the 2013 IEEE congress on evolutionary computation (pp. 1564–1569). doi:10.1109/CEC.2013.6557748.

  • Li, C., Yang, S., & Nguyen, T. T. (2012). A self-learning particle swarm optimizer for global optimization problems. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42(3), 627–646.

  • Li, Z., & Tan, G. (2008). A self-adaptive mutation-particle swarm optimization algorithm. In Proceedings of the fourth international conference on natural computation (Vol. 1, pp. 30–34). IEEE.

  • Liu, B., Wang, L., Jin, Y. H., Tang, F., & Huang, D. X. (2005). Improved particle swarm optimization combined with chaos. Chaos, Solitons & Fractals, 25(5), 1261–1271.

  • Liu, Q., Wei, W., Yuan, H., Zhan, Z. H., & Li, Y. (2016). Topology selection for particle swarm optimization. Information Sciences, 363, 154–173. doi:10.1016/j.ins.2016.04.050.

  • Lynn, N., & Suganthan, P. N. (2015). Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm and Evolutionary Computation, 24, 11–24.

  • Mascia, F., Pellegrini, P., Stützle, T., & Birattari, M. (2014). An analysis of parameter adaptation in reactive tabu search. International Transactions in Operational Research, 21(1), 127–152.

  • Nepomuceno, F. V., & Engelbrecht, A. P. (2013). A self-adaptive heterogeneous PSO for real-parameter optimization. In Proceedings of the 2013 IEEE congress on evolutionary computation (pp. 361–368). IEEE.

  • Nickabadi, A., Ebadzadeh, M. M., & Safabakhsh, R. (2011). A novel particle swarm optimization algorithm with adaptive inertia weight. Applied Soft Computing, 11(4), 3658–3670.

  • Pandey, B. B., Debbarma, S., & Bhardwaj, P. (2015). Particle swarm optimization with varying inertia weight for solving nonlinear optimization problem. In Proceedings of the 2015 international conference on electrical, electronics, signals, communication and optimization (pp. 1–5). IEEE.

  • Panigrahi, B. K., Ravikumar Pandi, V., & Das, S. (2008). Adaptive particle swarm optimization approach for static and dynamic economic load dispatch. Energy Conversion and Management, 49(6), 1407–1415.

  • Pellegrini, P., Stützle, T., & Birattari, M. (2012). A critical analysis of parameter adaptation in ant colony optimization. Swarm Intelligence, 6(1), 23–48.

  • Poli, R. (2009). Mean and variance of the sampling distribution of particle swarm optimizers during stagnation. IEEE Transactions on Evolutionary Computation, 13(4), 712–721.

  • Poli, R., & Broomhead, D. (2007). Exact analysis of the sampling distribution for the canonical particle swarm optimiser and its convergence during stagnation. In Proceedings of the 9th annual conference on genetic and evolutionary computation (pp. 134–141). New York, NY: ACM.

  • Salomon, R. (1996). Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. BioSystems, 39(3), 263–278.

  • Shi, Y., & Eberhart, R. (1998). A modified particle swarm optimizer. In Proceedings of the 1998 IEEE international conference on evolutionary computation, (pp. 69–73).

  • Shi, Y., & Eberhart, R. C. (1999). Empirical study of particle swarm optimization. In Proceedings of the 1999 IEEE congress on evolutionary computation (Vol. 3, pp. 1945–1950). IEEE.

  • Suganthan, P. N., Hansen, N., Liang, J. J., Deb, K., Chen, Y., & Auger, A., et al. (2005). Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical report. Nanyang Technological University.

  • Taherkhani, M., & Safabakhsh, R. (2016). A novel stability-based adaptive inertia weight for particle swarm optimization. Applied Soft Computing, 38(4), 281–295.

  • Tanweer, M. R., Suresh, S., & Sundararajan, N. (2015). Self regulating particle swarm optimization algorithm. Information Sciences, 294, 182–202.

  • Trelea, I. C. (2003). The particle swarm optimization algorithm: Convergence analysis and parameter selection. Information Processing Letters, 85(6), 317–325.

  • Van Den Bergh, F., & Engelbrecht, A. P. (2006). A study of particle swarm optimization particle trajectories. Information Sciences, 176(8), 937–971.

  • Van Zyl, E., & Engelbrecht, A. (2014). Comparison of self-adaptive particle swarm optimizers. In Proceedings of the 2014 IEEE symposium on swarm intelligence (pp. 48–56).

  • Wang, Y., Li, B., Weise, T., Wang, J., Yuan, B., & Tian, Q. (2011). Self-adaptive learning based particle swarm optimization. Information Sciences, 181(20), 4515–4538.

  • Xu, G. (2013). An adaptive parameter tuning of particle swarm optimization algorithm. Applied Mathematics and Computation, 219(9), 4560–4569.

  • Yang, C., Gao, W., Liu, N., & Song, C. (2015). Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight. Applied Soft Computing, 29, 386–394.

Author information

Corresponding author

Correspondence to Kyle Robert Harrison.

Additional information

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).

Appendices

Appendix 1: Benchmark problems

This appendix provides further, in-depth definitions of the benchmark problems employed in this study.

A function, f, was shifted using

$$\begin{aligned} f^\text {{Sh}}(\mathbf x ) = f(\mathbf x -\gamma ) + \beta \end{aligned}$$

where \(\beta \) and \(\gamma \) are constants. Rotation was implemented using either a randomly generated orthonormal rotation matrix, denoted by ‘ortho’, or a linear transformation matrix, denoted by ‘linear’. In either scenario, the rotation was performed using Salomon’s method (Salomon 1996) with a new rotation matrix computed for each of the independent runs using the condition number provided. The rotated functions, denoted by \(f^\text {{R}}\), were then computed by multiplying the decision vector \(\mathbf x \) by the transpose of the rotation matrix. Noisy functions, denoted by \(f^\text {N}\), were generated by multiplying each decision variable by a noise value sampled from a Gaussian distribution with the specified mean and deviation. Table 2 provides the configuration parameters for the shifted, rotated, rotated and shifted, and noisy versions of the functions in the columns ‘Sh’, ‘R’, ‘ShR’, and ‘N’, respectively.
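
As a concrete illustration, the sketch below applies the shift and noise transformations exactly as described above to an arbitrary base function; the rotation step via Salomon's method is omitted, and all names are illustrative.

```python
import numpy as np

def shifted(f, gamma, beta):
    """f^Sh(x) = f(x - gamma) + beta, per the definition above."""
    return lambda x: f(x - gamma) + beta

def noisy(f, mean, stdev):
    """Noisy variant: each decision variable is multiplied by a Gaussian
    noise sample with the specified mean and deviation before evaluation."""
    return lambda x: f(x * np.random.normal(mean, stdev, size=x.shape))

# Example usage: a shifted spherical function (Eq. 61) with gamma = 1, beta = 100.
sphere = lambda x: np.sum(x ** 2)
sphere_sh = shifted(sphere, gamma=1.0, beta=100.0)
```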

It should be noted that the composition functions \(f_{27} - f_{37}\) are equal to functions \(f_{15} - f_{25}\) from the CEC 2005 benchmark set (Suganthan et al. 2005). For each of these problems, the composed functions are each rotated using linear transformation matrices with independent condition numbers. Thus, the reader is directed to Suganthan et al. (2005) for specific details regarding the configurations of these functions.

The equations for each benchmark problem are as follows.

\(f_1,\) :

the absolute value function, defined as

$$\begin{aligned} f_1(\mathbf x ) = \sum _{j=1}^{n_x} |x_j| \end{aligned}$$
(38)

with each \(x_j \in [-100,100]\).

\(f_2,\) :

the Ackley function, defined as

$$\begin{aligned} f_2(\mathbf x ) = -20e^{-0.2\sqrt{\frac{1}{n_x}\sum _{j=1}^{n_x} x_j^2}} - e^{\frac{1}{n_x}\sum _{j=1}^{n_x}\cos (2 \pi x_j)} + 20 + e \end{aligned}$$
(39)

with each \(x_j \in [-32.768, 32.768]\). Shifted, rotated, and rotated and shifted versions of \(f_{2}\) were also used.
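
For instance, Eq. (39) can be transcribed directly into vectorized NumPy code (a sketch):

```python
import numpy as np

def ackley(x):
    """Ackley function f_2, Eq. (39); x is a numpy array with entries in
    [-32.768, 32.768]. The global minimum of 0 is at the origin."""
    n = x.size
    term1 = -20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
    term2 = -np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
    return term1 + term2 + 20.0 + np.e
```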

\(f_3,\) :

the alpine function, defined as

$$\begin{aligned} f_3(\mathbf x ) = \left( \prod _{j=1}^{n_x} \sin (x_j) \right) \sqrt{\prod _{j=1}^{n_x} x_j} \end{aligned}$$
(40)

with each \(x_j \in [-10, 10]\).

\(f_4,\) :

the egg holder function, defined as

$$\begin{aligned} f_4(\mathbf x ) = \sum _{j=1}^{n_x - 1} \left( -(x_{j+1} + 47)\sin \left( \sqrt{|x_{j+1} + x_j/2 + 47|}\right) + \sin \left( \sqrt{|x_j - (x_{j+1} + 47)| }\right) (-x_j) \right) \end{aligned}$$
(41)

with each \(x_j \in [-512,512]\).

\(f_5,\) :

the elliptic function, defined as

$$\begin{aligned} f_5(\mathbf x ) = \sum _{j=1}^{n_x} (10^6)^{\frac{j-1}{n_x-1}} x_j^2 \end{aligned}$$
(42)

with each \(x_j \in [-100, 100]\). Shifted, rotated, and rotated and shifted versions of \(f_{5}\) were also used.

\(f_6,\) :

the Griewank function, defined as

$$\begin{aligned} f_6(\mathbf x ) = 1 + \frac{1}{4000} \sum _{j=1}^{n_x} x_j^2 - \prod _{j=1}^{n_x} \cos \left( \frac{x_j}{\sqrt{j}} \right) \end{aligned}$$
(43)

with each \(x_j \in [-600,600]\). Shifted, rotated, and rotated and shifted versions of \(f_{6}\) were also used. For the rotated and shifted version of \(f_6\), the range was modified to \(x_j \in [0, 600]\) such that the global minimum was outside the bounds.

\(f_7,\) :

the hyperellipsoid function, defined as

$$\begin{aligned} f_7(\mathbf x ) = \sum _{j=1}^{n_x} j x_j^2 \end{aligned}$$
(44)

with each \(x_j \in [-5.12,5.12]\).

\(f_8,\) :

the Michalewicz function, defined as

$$\begin{aligned} f_8(\mathbf x ) = -\sum _{j=1}^{n_x} \sin (x_j)\left( \sin \left( \frac{j x_j^2}{\pi } \right) \right) ^{2m} \end{aligned}$$
(45)

with each \(x_j \in [0,\pi ]\) and \(m=10\).

\(f_9,\) :

the norwegian function, defined as

$$\begin{aligned} f_9(\mathbf x ) = \prod _{j=1}^{n_x} \left( \cos (\pi x_j^3) \left( \frac{99+x_j}{100} \right) \right) \end{aligned}$$
(46)

with each \(x_j \in [-1.1,1.1]\).

\(f_{10},\) :

the quadric function, defined as

$$\begin{aligned} f_{10}(\mathbf x ) = \sum _{i=1}^{n_x} \left( \sum _{j=1}^{i} x_j \right) ^2 \end{aligned}$$
(47)

with each \(x_j \in [-100,100]\).

\(f_{11},\) :

the quartic function, defined as

$$\begin{aligned} f_{11}(\mathbf x ) = \sum _{j=1}^{n_x} j x_j^4 \end{aligned}$$
(48)

with each \(x_j \in [-1.28,1.28]\). A noisy version of the quartic function, referred to as De Jong’s f4 function, was generated according to

$$\begin{aligned} f_{11}^N(\mathbf x ) = \sum _{j=1}^{n_x} (j x_j^4 + N(0, 1)) \end{aligned}$$
(49)

using the same domain as the quartic function.

\(f_{12},\) :

the Rastrigin function, defined as

$$\begin{aligned} f_{12}(\mathbf x ) = 10n_x + \sum _{j=1}^{n_x} (x_j^2 - 10\cos (2 \pi x_j)) \end{aligned}$$
(50)

with each \(x_j \in [-5.12,5.12]\). Shifted, rotated, and rotated and shifted versions of \(f_{12}\) were also used.

\(f_{13},\) :

the Rosenbrock function, defined as

$$\begin{aligned} f_{13}(\mathbf x ) = \sum _{j=1}^{n_x-1} \left( 100(x_{j+1} - x_j^2)^2 + (x_j - 1)^2\right) \end{aligned}$$
(51)

with each \(x_j \in [-30,30]\). Shifted and rotated versions of \(f_{13}\) were also used. Both the rotated and shifted versions of \(f_{13}\) used the domain \([-100,100]\).
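
A vectorized sketch of Eq. (51), included to make the coupling between consecutive dimensions explicit:

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock function f_13, Eq. (51); minimum of 0 at x = (1, ..., 1)."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)
```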

\(f_{14},\) :

the Salomon function, defined as

$$\begin{aligned} f_{14}(\mathbf x ) = -\cos \left( 2 \pi \sqrt{\sum _{j=1}^{n_x} x_j^2} \right) + 0.1 \sqrt{\sum _{j=1}^{n_x} x_j^2} + 1 \end{aligned}$$
(52)

with each \(x_j \in [-100,100]\).

\(f_{15},\) :

the Schaffer 6 function, defined as

$$\begin{aligned} f_{15}(\mathbf x ) = \sum _{j=1}^{n_x - 1} \left( 0.5 + \frac{\sin ^2\left(\sqrt{x_j^2 + x_{j+1}^2}\right) - 0.5}{(1+0.001(x_j^2 + x_{j+1}^2))^2} \right) \end{aligned}$$
(53)

with each \(x_j \in [-100,100]\). A rotated and shifted version of \(f_{15}\) was also used.

\(f_{16},\) :

the Schwefel 1.2 function, defined as

$$\begin{aligned} f_{16}(\mathbf x ) = \sum _{i=1}^{n_x}\left( \sum _{j=1}^{i} x_j\right) ^2 \end{aligned}$$
(54)

with each \(x_j \in [-100,100]\). Shifted and rotated versions of \(f_{16}\) were also used. Additionally, a shifted, noisy version of the Schwefel 1.2 function, defined as

$$\begin{aligned} f_{16}^{ShN}(\mathbf x ) = \sum _{i=1}^{n_x}\left( \sum _{j=1}^{i} x_j\right) ^2 (1 + 0.4|N(0,1)|) \end{aligned}$$
(55)

was also used, with the same domain as the base function.

\(f_{17},\) :

the Schwefel 2.6 function, defined as

$$\begin{aligned} f_{17}(\mathbf x ) = \max _j\{|\mathbf A _j\mathbf x - \mathbf B _j|\} \end{aligned}$$
(56)

with each \(x_j \in [-100,100]\), each \(a_{ij} \in \mathbf A \) is uniformly sampled from \(U(-500,500)\) such that \(\det (\mathbf A ) \ne 0\), and each \(\mathbf B _j = \mathbf A _j\mathbf r \) where each \(r_i \in \mathbf r \) is uniformly sampled from \(U(-100,100)\). A shifted version of \(f_{17}\) was also used.
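
The construction of \(\mathbf A \) and \(\mathbf B \) described above can be sketched as follows; resampling until \(\mathbf A \) is nonsingular is an assumption about how the non-zero determinant condition is enforced.

```python
import numpy as np

def schwefel_2_6_data(n, rng=None):
    """Sample A and B for f_17: a_ij ~ U(-500, 500) with det(A) != 0,
    r_i ~ U(-100, 100), and B_j = A_j r."""
    rng = rng or np.random.default_rng()
    A = rng.uniform(-500, 500, size=(n, n))
    while np.isclose(np.linalg.det(A), 0.0):  # enforce det(A) != 0 by resampling
        A = rng.uniform(-500, 500, size=(n, n))
    r = rng.uniform(-100, 100, size=n)
    return A, A @ r

def schwefel_2_6(x, A, B):
    """f_17(x) = max_j |A_j x - B_j|, Eq. (56)."""
    return np.max(np.abs(A @ x - B))
```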

\(f_{18},\) :

the Schwefel 2.13 function, defined as

$$\begin{aligned} f_{18}(\mathbf x ) = \sum _{j=1}^{n_x} (\mathbf A _j - \mathbf B _j(\mathbf x ))^2 \end{aligned}$$
(57)

with each \(x_j \in [-\pi ,\pi ]\), and

$$\begin{aligned} \mathbf A _j = \sum _{i=1}^{n_x}(a_{ij}\sin (\alpha _i) + b_{ij}\cos (\alpha _i)) \end{aligned}$$

and

$$\begin{aligned} \mathbf B _j(\mathbf x ) = \sum _{i=1}^{n_x}(a_{ij}\sin (x_i) + b_{ij}\cos (x_i)) \end{aligned}$$

where \(a_{ij} \in \mathbf A , b_{ij} \in \mathbf B \), \(a_{ij}, b_{ij} \sim U(-100,100)\), and \(\alpha _i \sim U(-\pi , \pi )\). A shifted version of \(f_{18}\) was also used.

\(f_{19},\) :

the Schwefel 2.21 function, defined as

$$\begin{aligned} f_{19}(\mathbf x ) = \max _j\{|x_j|, 1 \le j \le n_x\} \end{aligned}$$
(58)

with each \(x_j \in [-100,100]\).

\(f_{20},\) :

the Schwefel 2.22 function, defined as

$$\begin{aligned} f_{20}(\mathbf x ) = \sum _{j=1}^{n_x}|x_j| + \prod _{j=1}^{n_x}|x_j| \end{aligned}$$
(59)

with each \(x_j \in [-10,10]\).

\(f_{21},\) :

the Shubert function, defined as

$$\begin{aligned} f_{21}(\mathbf x ) = \prod _{j=1}^{n_x}\left( \sum _{i=1}^{5} (i\cos ((i+1)x_j + i)) \right) \end{aligned}$$
(60)

with each \(x_j \in [-10,10]\).

\(f_{22},\) :

the spherical function, defined as

$$\begin{aligned} f_{22}(\mathbf x ) = \sum _{j=1}^{n_x} x_j^2 \end{aligned}$$
(61)

with each \(x_j \in [-5.12,5.12]\). A shifted version of \(f_{22}\) was also used.

\(f_{23},\) :

the step function, defined as

$$\begin{aligned} f_{23}(\mathbf x ) = \sum _{j=1}^{n_x} (\lfloor x_j + 0.5 \rfloor )^2 \end{aligned}$$
(62)

with each \(x_j \in [-100,100]\).

\(f_{24},\) :

the Vincent function, defined as

$$\begin{aligned} f_{24}(\mathbf x ) = - \left( 1 + \sum _{j=1}^{n_x} \sin (10 \sqrt{x_j})\right) \end{aligned}$$
(63)

with each \(x_j \in [0.25,10]\).

\(f_{25},\) :

the Weierstrass function, defined as

$$\begin{aligned} \begin{aligned} f_{25}(\mathbf x ) =&\sum _{j=1}^{n_x}\left( \sum _{i=1}^{20} (a^i\cos (2 \pi b^i(x_j + 0.5))) \right) \\&-n_x\sum _{i=1}^{20}(a^i\cos (\pi b^i)) \end{aligned} \end{aligned}$$
(64)

with each \(x_j \in [-0.5,0.5], a=0.5\), and \(b=3\). A rotated and shifted version of \(f_{25}\) was also used.

\(f_{26},\) :

a shifted expansion of the Griewank and Rosenbrock functions [Eqs. (43) and (51), respectively] with each \(x_j \in [-3,1]\). Note that \(f_{26}\) is equivalent to \(f_{13}\) from the 2005 CEC benchmark suite.

\(f_{27} - f_{37},\) :

composition functions equivalent to \(f_{15} - f_{25}\) from the 2005 CEC benchmark suite. All functions have each \(x_j \in [-5,5]\), with the exception of \(f_{37}\) which has each \(x_j \in [2,5]\).

Appendix 2: Overall ranks using the local-best topology

Table 11 summarizes the results obtained across all benchmark problems using the Mann–Whitney U statistical analysis procedure described in Sect. 5.3. Table 11 clearly indicates that the topology has a significant influence on the performance of the inertia weight strategies. Specifically, the constant and random strategies are no longer the top-performing strategies when the local-best topology is considered. Rather, the constant strategy attained a rank of 5 and the random strategy attained a rank of 9 when considering the accuracy performance measure. Despite its relatively poor rank of 13 when using the global-best topology, the PSO-NL strategy showed the best overall accuracy when using the local-best topology. Thus, the poor performance of the adaptive inertia weight strategies observed with the global-best topology does not necessarily hold when the local-best topology is used. Therefore, it can be concluded that the topology should be considered when tuning for optimal performance. This comes as no surprise, given that it is well known that the topology should be included as a tuned parameter if optimal performance is desired (Engelbrecht 2013a).

Table 11 Summary of performance across all 60 benchmark problems using the local-best topology
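
For reference, the flavour of pairwise comparison underlying such rank summaries can be sketched as below; the scoring scheme, significance level, and the "lower is better" orientation are assumptions for illustration and do not reproduce the exact procedure of Sect. 5.3.

```python
from scipy.stats import mannwhitneyu

def pairwise_wins(results, alpha=0.05):
    """Count significant pairwise wins per strategy on a single problem.

    `results` maps a strategy name to the list of final accuracies obtained
    over its independent runs (lower is better in this sketch).
    """
    names = list(results)
    wins = {name: 0 for name in names}
    for a in names:
        for b in names:
            if a == b:
                continue
            _, p = mannwhitneyu(results[a], results[b], alternative='less')
            if p < alpha:  # strategy a is significantly better than strategy b
                wins[a] += 1
    return wins
```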


Cite this article

Harrison, K.R., Engelbrecht, A.P. & Ombuki-Berman, B.M. Inertia weight control strategies for particle swarm optimization. Swarm Intell 10, 267–305 (2016). https://doi.org/10.1007/s11721-016-0128-z
