
Particle swarm optimization with neighborhood-based budget allocation

  • Original Article
  • International Journal of Machine Learning and Cybernetics

Abstract

The standard particle swarm optimization (PSO) algorithm allocates the total available budget of function evaluations equally and concurrently among the particles of the swarm. In the present work, we propose a new variant of PSO where each particle is dynamically assigned a different computational budget based on the quality of its neighborhood. The main goal is to favor particles with high-quality neighborhoods by asynchronously providing them with more function evaluations than the rest. For this purpose, we define quality criteria to assess a neighborhood with respect to the information it possesses in terms of solutions' quality and diversity. Established stochastic techniques are employed for the final selection among the particles. Different variants are proposed by combining various quality criteria in a single- or multi-objective manner. The proposed approach is assessed on widely used test suites as well as on a set of real-world problems. Experimental evidence reveals the efficiency of the proposed approach and its competitiveness against other PSO-based variants as well as different established algorithms.


Notes

  1. http://sci2s.ugr.es/eamhco/testfunctions-SOCO.

  2. http://sci2s.ugr.es/eamhco/SOCO-results.xls.

References

  1. Akbari R, Ziarati K (2011) A rank based particle swarm optimization algorithm with dynamic adaptation. J Comput Appl Math 235(8):2694–2714

  2. Auger A, Hansen N (2005) A restart CMA evolution strategy with increasing population size. In: Proceedings of the IEEE Congress on Evolutionary Computation, Edinburgh, UK, p 1769–1776

  3. Bäck T, Fogel D, Michalewicz Z (1997) Handbook of evolutionary computation. IOP Publishing and Oxford University Press, New York

  4. Bartz-Beielstein T, Blum D, Branke J (2007) Particle swarm optimization and sequential sampling in noisy environments. In: Metaheuristics, Operations Research/Computer Science Interfaces Series, vol 39. Springer, p 261–273

  5. Clerc M, Kennedy J (2002) The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73

  6. Coello Coello CA, Van Veldhuizen DA, Lamont GB (2002) Evolutionary algorithms for solving multi-objective problems. Kluwer, New York

  7. Duarte A, Mart R, Gortazar F (2011) Path relinking for large-scale global optimization. Soft Comput 15(11):2257–2273

  8. Eshelman J (1991) The CHC adaptive search algorithm: how to have safe search when engaging in nontraditional genetic recombination. In: Foundations of genetic algorithms, p 265–283

  9. Grosan C, Abraham A (2008) A new approach for solving nonlinear equations systems. IEEE Trans Syst Man Cybern Part A Syst Hum 38(3):698–714

  10. Jin Y, Olhofer M, Sendhoff B (2001) Evolutionary dynamic weighted aggregation for multiobjective optimization: Why does it work and how? In: Proceedings of the GECCO 2001 Conference, San Francisco, CA, p 1042–1049

  11. Kennedy J (1999) Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance. In: Proceedings of the 1999 congress on evolutionary computation, Washington, D.C., USA, IEEE Press, p 1931–1938

  12. Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol IV, Piscataway, NJ, IEEE Service Center p 1942–1948

  13. Kumar R (2014) Directed bee colony optimization algorithm. Swarm Evol Comput 17:60–73

  14. Lozano M, Molina D, Herrera F (2011) Editorial scalability of evolutionary algorithms and other metaheuristics for large-scale continuous optimization problems. Soft Comput 15(11):2085–2087

  15. Ma W, Wang M, Zhu X (2014) Improved particle swarm optimization based approach for bilevel programming problem-an application on supply chain model. Int J Mach Learn Cybern 5(2):281–292

  16. Omran MGH, Mahdavi M (2008) Global-best harmony search. Appl Math Comput 198(2):643–656

  17. Pan H, Wang L, Liu B (2006) Particle swarm optimization for function optimization in noisy environment. Appl Math Comput 181(2):908–919

  18. Parsopoulos KE, Vrahatis MN (2002) Particle swarm optimization method in multiobjective problems. In: Proceedings of the ACM 2002 Symposium on Applied Computing (SAC 2002), Madrid, Spain, ACM Press, p 603–607

  19. Parsopoulos KE, Vrahatis MN (2010) Particle swarm optimization and intelligence: advances and applications. Information Science Publishing (IGI Global)

  20. Poli R (2007) An analysis of publications on particle swarm optimisation applications. Technical Report CSM-649, University of Essex, Department of Computer Science, UK

  21. Rada-Vilela J, Zhang M, Johnston M (2013) Optimal computing budget allocation in particle swarm optimization. In: Proceedings of the 2013 Genetic and Evolutionary Computation Conference (GECCO’13), Amsterdam, Netherlands, p 81–88

  22. Rana S, Jasola S, Kumar R (2013) A boundary restricted adaptive particle swarm optimization for data clustering. Int J Mach Learn Cybern 4(4):391–400

  23. Souravlias D, Parsopoulos KE (2013) Particle swarm optimization with budget allocation through neighborhood ranking. In: Proceedings of the 2013 Genetic and Evolutionary Computation Conference (GECCO’13), p 105–112

  24. Suganthan PN (1999) Particle swarm optimizer with neighborhood operator. In: Proceedings of the IEEE Congress on Evolutionary Computation, Washington, D.C., USA p 1958–1961

  25. Tian N, Lai C-H (2014) Parallel quantum-behaved particle swarm optimization. Int J Mach Learn Cybern 5(2):309–318

  26. Trelea IC (2003) The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf Process Lett 85:317–325

  27. Voglis C, Parsopoulos KE, Lagaris IE (2012) Particle swarm optimization with deliberate loss of information. Soft Comput 16(8):1373–1392

  28. Wan L-Y, Li W (2008) An improved particle swarm optimization algorithm with rank-based selection. In: Proceedings of the IEEE international conference on machine learning and cybernetics, vol 7, pp 4090–4095

  29. Wang X, He Y, Dong L, Zhao H (2011) Particle swarm optimization for determining fuzzy measures from data. Inf Sci 181(19):4230–4252

  30. Whitley D, Lunacek M, Knight J (2004) Ruffled by ridges: How evolutionary algorithms can fail. In: Deb K et al. (ed.) Lecture Notes in Computer science (LNCS), vol 3103, p 294–306. Springer

  31. Yadav P, Kumar R, Panda SK, Chang CS (2012) An intelligent tuned harmony search algorithm for optimisation. Inf Sci 196:47–72

  32. Zambrano-Bigiarini M, Clerc M, Rojas R (2013) Standard particle swarm optimisation 2011 at CEC-2013: a baseline for future PSO improvements. In: Proceedings of the IEEE 2013 congress on evolutionary computation, Mexico, p 2337–2344

  33. Zhang S, Chen P, Lee LH, Peng CE, Chen C-H (2011) Simulation optimization using the particle swarm optimization with optimal computing budget allocation. In: Proceedings of the 2011 winter simulation conference, p 4298–4309

Acknowledgments

The authors wish to thank the editor as well as the anonymous reviewers for their constructive comments and suggestions.

Author information

Correspondence to K. E. Parsopoulos.

Appendix: Test problems

1.1 Standard test suite

The standard test suite consists of the following problems:

Test Problem 0 (TP0—Sphere) [19]. This is a separable \(n\)-dimensional problem, defined as

$$\begin{aligned} f (x) = \sum _{i=1}^{n} x_{i}^{2}, \end{aligned}$$
(21)

and it has a single global minimizer, \(x^{*} = (0,0,\ldots ,0)^{\top }\), with \(f(x^{*}) = 0\).

Test Problem 1 (TP1—Generalized Rosenbrock) [19]. This is a non-separable \(n\)-dimensional problem, defined as

$$\begin{aligned} f(x) = \sum _{i=1}^{n-1} \left( 100 \left( x_{i+1}-x_{i}^{2} \right) ^{2} + \left( x_{i}-1 \right) ^{2} \right) , \end{aligned}$$
(22)

and it has a global minimizer, \(x^{*} = (1,1,\ldots ,1)^{\top }\), with \(f(x^{*}) = 0\).

Test Problem 2 (TP2—Rastrigin) [19]. This is a separable \(n\)-dimensional problem, defined as

$$\begin{aligned} f(x) = 10 n + \sum _{i=1}^{n} \big ( x_{i}^{2} - 10 \cos (2\pi x_{i})\big ), \end{aligned}$$
(23)

and it has a global minimizer, \(x^{*} = (0,0,\ldots ,0)^{\top }\), with \(f(x^{*}) = 0\).

Test Problem 3 (TP3—Griewank) [19]. This is a non-separable \(n\)-dimensional problem, defined as

$$\begin{aligned} f(x) = \sum _{i=1}^{n} \frac{x_{i}^{2}}{4000} - \prod _{i=1}^{n} \cos \left( \frac{x_{i}}{\sqrt{i}}\right) + 1, \end{aligned}$$
(24)

and it has a global minimizer, \(x^{*} = (0,0,\ldots ,0)^{\top }\), with \(f(x^{*}) = 0\).

Test Problem 4 (TP4—Ackley) [19]. This is a non-separable \(n\)-dimensional problem, defined as

$$\begin{aligned} f(x)&= 20 + \exp (1) - 20\exp \left( -0.2\sqrt{\frac{1}{n}\sum _{i=1}^{n}x_{i}^{2}}\right) \nonumber \\&- \exp \left( \frac{1}{n}\sum _{i=1}^{n}\cos (2\pi x_{i})\right) , \end{aligned}$$
(25)

and it has a global minimizer, \(x^{*} = (0,0,\ldots ,0)^{\top }\), with \(f(x^{*}) = 0\).
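The five benchmark functions above are standard and simple to implement. The following Python sketch (a minimal reference implementation, not the authors' code) reproduces Eqs. (21)–(25) and can be used to verify the stated global minima:

```python
import math

def sphere(x):
    # TP0 (Eq. 21): separable; global minimum f(0, ..., 0) = 0.
    return sum(xi**2 for xi in x)

def rosenbrock(x):
    # TP1 (Eq. 22): non-separable; global minimum f(1, ..., 1) = 0.
    return sum(100.0 * (x[i + 1] - x[i]**2)**2 + (x[i] - 1.0)**2
               for i in range(len(x) - 1))

def rastrigin(x):
    # TP2 (Eq. 23): separable, highly multimodal; global minimum f(0, ..., 0) = 0.
    return 10.0 * len(x) + sum(xi**2 - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

def griewank(x):
    # TP3 (Eq. 24): non-separable; global minimum f(0, ..., 0) = 0.
    s = sum(xi**2 for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def ackley(x):
    # TP4 (Eq. 25): non-separable; global minimum f(0, ..., 0) = 0.
    n = len(x)
    rms = math.sqrt(sum(xi**2 for xi in x) / n)
    cos_mean = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return 20.0 + math.e - 20.0 * math.exp(-0.2 * rms) - math.exp(cos_mean)
```

Evaluating each function at its listed minimizer returns a value of (numerically) zero, which is a quick way to check a transcription of the formulas.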

1.2 Nonlinear systems

This test set consists of six real-application problems, which are modeled as systems of nonlinear equations. Computing a solution of a nonlinear system is a very challenging task that has received ongoing attention from the scientific community. A common methodology for solving such systems is to transform them into an equivalent global optimization problem, which allows the use of a wide range of optimization tools. The transformation produces a single objective function by aggregating all the system’s equations, such that the solutions of the original system coincide exactly with those of the derived optimization problem.

Consider the system of nonlinear equations
$$\begin{aligned} \left\{ \begin{array}{ll} f_1(x)=0, \\ f_2(x)=0, \\ \qquad \vdots \\ f_m(x)=0, \end{array} \right. \end{aligned}$$
with \(x \in S \subset \mathbb {R}^n\). Then, the objective function,

$$\begin{aligned} f(x) = \sum _{i=1}^{m} |f_i(x)|, \end{aligned}$$
(26)

defines an equivalent optimization problem. Obviously, if \(x^*\) with \(f(x^*) = 0\) is a global minimizer of the objective function, then \(x^*\) is also a solution of the corresponding nonlinear system and vice versa.
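In code, the transformation of Eq. (26) amounts to summing absolute residuals. The helper below is an illustrative sketch (the names are ours, not from the paper); the usage example aggregates a hypothetical 2×2 system, \(x_1 + x_2 = 3\), \(x_1 - x_2 = 1\), whose solution \((2, 1)\) yields an objective value of zero:

```python
def system_to_objective(residuals):
    """Aggregate a list of residual functions f_i (each mapping a point x
    to f_i(x)) into the objective of Eq. (26): f(x) = sum_i |f_i(x)|."""
    def objective(x):
        return sum(abs(f_i(x)) for f_i in residuals)
    return objective

# Usage: a hypothetical 2x2 system  x1 + x2 - 3 = 0,  x1 - x2 - 1 = 0.
obj = system_to_objective([
    lambda x: x[0] + x[1] - 3.0,
    lambda x: x[0] - x[1] - 1.0,
])
```

Any point with `obj(x) == 0` is a solution of the system, which is exactly the equivalence stated above.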

In our experiments, we considered the following nonlinear systems, previously employed by Grosan and Abraham [9] to justify the usefulness of evolutionary approaches as efficient solvers of nonlinear systems:

Test Problem 5 (TP5—Interval Arithmetic Benchmark) [9]. This problem consists of the following system:

$$\begin{aligned} \left\{ \begin{array}{l} x_1 - 0.25428722 - 0.18324757\ x_4 x_3 x_9 = 0,\\ x_2 - 0.37842197 - 0.16275449\ x_1 x_{10} x_6 = 0,\\ x_3 - 0.27162577 - 0.16955071\ x_1 x_2 x_{10} = 0,\\ x_4 - 0.19807914 - 0.15585316\ x_7 x_1 x_6 = 0,\\ x_5 - 0.44166728 - 0.19950920\ x_7 x_6 x_3 = 0,\\ x_6 - 0.14654113 - 0.18922793\ x_8 x_5 x_{10} = 0,\\ x_7 - 0.42937161 - 0.21180486\ x_2 x_5 x_8 = 0,\\ x_8 - 0.07056438 - 0.17081208\ x_1 x_7 x_6 = 0,\\ x_9 - 0.34504906 - 0.19612740\ x_{10} x_6 x_8 = 0,\\ x_{10} - 0.42651102 - 0.21466544\ x_4 x_8 x_1 =0.\\ \end{array} \right. \end{aligned}$$
(27)

The resulting objective function, defined by Eq. (26), is \(10\)-dimensional with global minimum \(f(x^*) = 0\).

Test Problem 6 (TP6—Neurophysiology Application) [9]. This problem consists of the following system:

$$\begin{aligned} \left\{ \begin{array}{l} x_1^2 + x_3^2 = 1,\\ x_2^2 + x_4^2 = 1,\\ x_5 x_3^3 + x_6 x_4^3 = c_1,\\ x_5 x_1^3 + x_6 x_2^3 = c_2,\\ x_5 x_1 x_3^2 + x_6 x_4^2 x_2 = c_3,\\ x_5 x_1^2 x_3 + x_6 x_2^2 x_4 = c_4,\\ \end{array} \right. \end{aligned}$$
(28)

where the constants are \(c_i = 0\), \(i=1,2,3,4\). The resulting objective function is \(6\)-dimensional with global minimum \(f(x^*) = 0\).
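As a concrete instance, TP6 with \(c_i = 0\) can be written directly as an objective via Eq. (26). The sketch below is illustrative, not the authors' code; note that the point \((0, 0, 1, 1, 0, 0)\) satisfies all six equations, which provides a convenient sanity check:

```python
def tp6_objective(x, c=(0.0, 0.0, 0.0, 0.0)):
    # Residuals of the six equations of the neurophysiology system (Eq. 28).
    x1, x2, x3, x4, x5, x6 = x
    residuals = [
        x1**2 + x3**2 - 1.0,
        x2**2 + x4**2 - 1.0,
        x5 * x3**3 + x6 * x4**3 - c[0],
        x5 * x1**3 + x6 * x2**3 - c[1],
        x5 * x1 * x3**2 + x6 * x4**2 * x2 - c[2],
        x5 * x1**2 * x3 + x6 * x2**2 * x4 - c[3],
    ]
    # Aggregate via Eq. (26).
    return sum(abs(r) for r in residuals)
```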

Test Problem 7 (TP7—Chemical Equilibrium Application) [9]. This problem consists of the following system:

$$\begin{aligned} \left\{ \begin{array}{l} x_1 x_2 + x_1 - 3 x_5 = 0,\\ 2 x_1 x_2 + x_1 + x_2 x_3^2 + R_8 x_2 - R x_5 + 2 R_{10} x_2^2 + R_7 x_2 x_3 + \\ R_9 x_2 x_4 = 0,\\ 2 x_2 x_3^2 + 2 R_5 x_3^2 - 8 x_5 + R_6 x_3 + R_7 x_2 x_3 = 0,\\ R_9 x_2 x_4 + 2 x_4^2 - 4 R x_5 = 0, \\ x_1(x_2 + 1) + R_{10} x_2^2 + x_2 x_3^2 + R_8 x_2 + R_5 x_3^2 + x_4^2 - 1 \\ + R_6 x_3 + R_7 x_2 x_3 + R_9 x_2 x_4 = 0, \\ \end{array} \right. \end{aligned}$$
(29)

where

$$\begin{aligned} R = 10, \quad R_5 = 0.193, \quad R_6 = \frac{0.002597}{\sqrt{40}}, \quad R_7 = \frac{0.003448}{\sqrt{40}}, \end{aligned}$$
$$\begin{aligned} R_8 = \frac{0.00001799}{40}, \quad R_9 = \frac{0.0002155}{\sqrt{40}}, \quad R_{10} = \frac{0.00003846}{40}. \end{aligned}$$

The corresponding objective function is \(5\)-dimensional with global minimum \(f(x^*) = 0\).

Test Problem 8 (TP8—Kinematic Application) [9]. This problem consists of the following system:

$$\begin{aligned} \left\{ \begin{array}{l} x_i^2 + x_{i+1}^2 -1 = 0,\\ a_{1i} x_1 x_3 + a_{2i} x_1 x_4 + a_{3i} x_2 x_3 + a_{4i} x_2 x_4 + a_{5i} x_2 x_7 + \\ a_{6i} x_5 x_8 + a_{7i} x_6 x_7 + a_{8i} x_6 x_8 + a_{9i} x_1 + a_{10i} x_2 + a_{11i} x_3 + \\ a_{12 i} x_4 + a_{13 i} x_5 + a_{14 i} x_6 + a_{15 i} x_7 + a_{16 i} x_8 + a_{17 i} = 0,\\ \end{array} \right. \end{aligned}$$
(30)

where \(a_{ki}\), \(1 \leqslant k \leqslant 17\), \(1 \leqslant i \leqslant 4\), denotes the element in the \(k\)-th row and \(i\)-th column of the matrix:

$$\begin{aligned} A = \left[ \begin{array}{llll} -0.249150680 &{} 0.125016350 &{} -0.635550077 &{} 1.48947730 \\ 1.609135400 &{} -0.686607360 &{} -0.115719920 &{} 0.23062341 \\ 0.279423430 &{} -0.119228120 &{} -0.666404480 &{} 1.32810730 \\ 1.434801600 &{} -0.719940470 &{} 0.110362110 &{} -0.25864503 \\ 0.000000000 &{} -0.432419270 &{} 0.290702030 &{} 1.16517200 \\ 0.400263840 &{} 0.000000000 &{} 1.258776700 &{} -0.26908494 \\ -0.800527680 &{} 0.000000000 &{} -0.629388360 &{} 0.53816987 \\ 0.000000000 &{} -0.864838550 &{} 0.581404060 &{} 0.58258598 \\ 0.074052388 &{} -0.037157270 &{} 0.195946620 &{} -0.20816985 \\ -0.083050031 &{} 0.035436896 &{} -1.228034200 &{} 2.68683200 \\ -0.386159610 &{} 0.085383482 &{} 0.000000000 &{} -0.69910317 \\ -0.755266030 &{} 0.000000000 &{} -0.079034221 &{} 0.35744413 \\ 0.504201680 &{} -0.039251967 &{} 0.026387877 &{} 1.24991170 \\ -1.091628700 &{} 0.000000000 &{} -0.057131430 &{} 1.46773600 \\ 0.000000000 &{} -0.432419270 &{} -1.162808100 &{} 1.16517200 \\ 0.049207290 &{} 0.000000000 &{} 1.258776700 &{} 1.07633970 \\ 0.049207290 &{} 0.013873010 &{} 2.162575000 &{} -0.69686809 \\ \end{array} \right] . \end{aligned}$$

The corresponding objective function is \(8\)-dimensional with global minimum \(f(x^*) = 0\).

Test Problem 9 (TP9—Combustion Application) [9]. This problem consists of the following system:

$$\begin{aligned} \left\{ \begin{array}{l} x_2 + 2 x_6 + x_9 + 2 x_{10} = 10^{-5}, \\ x_3 + x_8 = 3 \times 10^{-5},\\ x_1 + x_3 + 2 x_5 + 2 x_8 + x_9 + x_{10} = 5 \times 10^{-5},\\ x_4 + 2 x_7 = 10^{-5},\\ 0.5140437 \times 10^{-7} x_5 = x_1^2,\\ 0.1006932 \times 10^{-6} x_6 = 2 x_2^2,\\ 0.7816278 \times 10^{-15} x_7 = x_4^2,\\ 0.1496236 \times 10^{-6} x_8 = x_1 x_3,\\ 0.6194411 \times 10^{-7} x_9 = x_1 x_2,\\ 0.2089296 \times 10^{-14} x_{10} = x_1 x_2^2.\\ \end{array} \right. \end{aligned}$$
(31)

The corresponding objective function is \(10\)-dimensional with global minimum \(f(x^*) = 0\).

Test Problem 10 (TP10—Economics Modeling Application) [9]. This problem consists of the following system:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \left( x_k + \sum _{i=1}^{n-k-1} x_i x_{i+k} \right) x_n - c_k = 0,\\ \displaystyle \sum _{l=1}^{n-1} x_l + 1 = 0, \\ \end{array} \right. \end{aligned}$$
(32)

where \(1 \leqslant k \leqslant n-1\), and \(c_i = 0\), \(i=1,2,\ldots ,n\). The problem was considered in its \(20\)-dimensional instance. Thus, the corresponding objective function was also \(20\)-dimensional, with global minimum \(f(x^*) = 0\).
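The economics system is the only member of the set with a variable dimension. The sketch below (illustrative, not the authors' code) implements the residuals of Eq. (32), aggregated via Eq. (26), for arbitrary \(n\). With \(c_i = 0\), the point \((-1, 0, \ldots, 0)\) satisfies all equations (the first \(n-1\) residuals vanish because \(x_n = 0\), and the last one sums to zero), so the objective is zero there:

```python
def tp10_objective(x, c=None):
    # Residuals of the economics modeling system (Eq. 32), aggregated
    # via Eq. (26). x is the n-dimensional point; c defaults to zeros.
    n = len(x)
    if c is None:
        c = [0.0] * n
    residuals = []
    for k in range(1, n):  # k = 1, ..., n-1
        s = x[k - 1] + sum(x[i - 1] * x[i + k - 1] for i in range(1, n - k))
        residuals.append(s * x[n - 1] - c[k - 1])
    residuals.append(sum(x[:n - 1]) + 1.0)  # last equation of the system
    return sum(abs(r) for r in residuals)
```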

Cite this article

Souravlias, D., Parsopoulos, K.E. Particle swarm optimization with neighborhood-based budget allocation. Int. J. Mach. Learn. & Cyber. 7, 451–477 (2016). https://doi.org/10.1007/s13042-014-0308-3
