Abstract
The standard particle swarm optimization (PSO) algorithm allocates the total available budget of function evaluations equally and concurrently among the particles of the swarm. In the present work, we propose a new variant of PSO where each particle is dynamically assigned different computational budget based on the quality of its neighborhood. The main goal is to favor particles with high-quality neighborhoods by asynchronously providing them with more function evaluations than the rest. For this purpose, we define quality criteria to assess a neighborhood with respect to the information it possesses in terms of solutions’ quality and diversity. Established stochastic techniques are employed for the final selection among the particles. Different variants are proposed by combining various quality criteria in a single- or multi-objective manner. The proposed approach is assessed on widely used test suites as well as on a set of real-world problems. Experimental evidence reveals the efficiency of the proposed approach and its competitiveness against other PSO-based variants as well as different established algorithms.
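As a rough illustration of the idea sketched in the abstract, the following minimal Python sketch assigns function evaluations one at a time to particles selected stochastically according to a neighborhood quality score. All specifics here (ring topology of radius 1, inverse-best-value quality score, roulette-wheel selection, constriction coefficients) are simplifying assumptions of this sketch, not the authors' exact method, which combines several quality criteria in single- or multi-objective form.

```python
import random

def sphere(x):
    # Simple separable test function with global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def pso_budget_allocation(f, dim, bounds, swarm_size=10, budget=2000, seed=0):
    """Illustrative local-best PSO: at every step, one particle is chosen
    with probability proportional to its neighborhood quality and receives
    the next function evaluation (asynchronous budget allocation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    v = [[0.0] * dim for _ in range(swarm_size)]
    p = [xi[:] for xi in x]            # personal best positions
    pv = [f(xi) for xi in x]           # personal best values
    evals = swarm_size
    chi, c1, c2 = 0.729, 2.05, 2.05    # standard constriction setup
    while evals < budget:
        # Neighborhood quality: ring of radius 1; a lower best value
        # in the neighborhood yields a higher quality score.
        qual = []
        for i in range(swarm_size):
            nbhd = [pv[(i - 1) % swarm_size], pv[i], pv[(i + 1) % swarm_size]]
            qual.append(1.0 / (1.0 + min(nbhd)))
        # Roulette-wheel selection of the particle that gets the evaluation.
        r, acc, i = rng.uniform(0.0, sum(qual)), 0.0, 0
        for j, q in enumerate(qual):
            acc += q
            if r <= acc:
                i = j
                break
        # Local-best PSO update for the selected particle only.
        nidx = min([(i - 1) % swarm_size, i, (i + 1) % swarm_size],
                   key=lambda k: pv[k])
        for d in range(dim):
            v[i][d] = chi * (v[i][d]
                             + c1 * rng.random() * (p[i][d] - x[i][d])
                             + c2 * rng.random() * (p[nidx][d] - x[i][d]))
            x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
        fx = f(x[i])
        evals += 1
        if fx < pv[i]:
            pv[i], p[i] = fx, x[i][:]
    return min(pv)
```

With the same seed, a run with a larger budget continues the same deterministic trajectory, so the best value found can only improve as the budget grows.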
References
Akbari R, Ziarati K (2011) A rank based particle swarm optimization algorithm with dynamic adaptation. J Comput Appl Math 235(8):2694–2714
Auger A, Hansen N (2005) A restart CMA evolution strategy with increasing population size. In: Proceedings of the IEEE congress on evolutionary computation, Edinburgh, UK, p 1769–1776
Bäck T, Fogel D, Michalewicz Z (1997) Handbook of evolutionary computation. IOP Publishing and Oxford University Press, New York
Bartz-Beielstein T, Blum D, Branke J (2007) Particle swarm optimization and sequential sampling in noisy environments. In: Metaheuristics, vol 39 of Operations Research/Computer Science Interfaces Series. Springer, p 261–273
Clerc M, Kennedy J (2002) The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73
Coello Coello CA, Van Veldhuizen DA, Lamont GB (2002) Evolutionary algorithms for solving multi-objective problems. Kluwer, New York
Duarte A, Martí R, Gortazar F (2011) Path relinking for large-scale global optimization. Soft Comput 15(11):2257–2273
Eshelman LJ (1991) The CHC adaptive search algorithm: how to have safe search when engaging in nontraditional genetic recombination. In: Foundations of genetic algorithms, p 265–283
Grosan C, Abraham A (2008) A new approach for solving nonlinear equations systems. IEEE Trans Syst Man Cybern Part A Syst Hum 38(3):698–714
Jin Y, Olhofer M, Sendhoff B (2001) Evolutionary dynamic weighted aggregation for multiobjective optimization: Why does it work and how? In: Proceedings of the GECCO 2001 Conference, San Francisco, CA, p 1042–1049
Kennedy J (1999) Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance. In: Proceedings of the 1999 congress on evolutionary computation, Washington, D.C., USA, IEEE Press, p 1931–1938
Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol IV, Piscataway, NJ, IEEE Service Center p 1942–1948
Kumar R (2014) Directed bee colony optimization algorithm. Swarm Evol Comput 17:60–73
Lozano M, Molina D, Herrera F (2011) Editorial scalability of evolutionary algorithms and other metaheuristics for large-scale continuous optimization problems. Soft Comput 15(11):2085–2087
Ma W, Wang M, Zhu X (2014) Improved particle swarm optimization based approach for bilevel programming problem-an application on supply chain model. Int J Mach Learn Cybern 5(2):281–292
Omran MGH, Mahdavi M (2008) Global-best harmony search. Appl Math Comput 198(2):643–656
Pan H, Wang L, Liu B (2006) Particle swarm optimization for function optimization in noisy environment. Appl Math Comput 181(2):908–919
Parsopoulos KE, Vrahatis MN (2002) Particle swarm optimization method in multiobjective problems. In: Proceedings of the ACM 2002 symposium on applied computing (SAC 2002), Madrid, Spain. ACM Press, p 603–607
Parsopoulos KE, Vrahatis MN (2010) Particle swarm optimization and intelligence: advances and applications. Information Science Publishing (IGI Global)
Poli R (2007) An analysis of publications on particle swarm optimisation applications. Technical Report CSM-649, University of Essex, Department of Computer Science, UK
Rada-Vilela J, Zhang M, Johnston M (2013) Optimal computing budget allocation in particle swarm optimization. In: Proceedings of the 2013 genetic and evolutionary computation conference (GECCO’13), Amsterdam, The Netherlands, p 81–88
Rana S, Jasola S, Kumar R (2013) A boundary restricted adaptive particle swarm optimization for data clustering. Int J Mach Learn Cybern 4(4):391–400
Souravlias D, Parsopoulos KE (2013) Particle swarm optimization with budget allocation through neighborhood ranking. In: Proceedings of the 2013 Genetic and Evolutionary Computation Conference (GECCO’13), p 105–112
Suganthan PN (1999) Particle swarm optimizer with neighborhood operator. In: Proceedings of the IEEE Congress on Evolutionary Computation, Washington, D.C., USA p 1958–1961
Tian N, Lai C-H (2014) Parallel quantum-behaved particle swarm optimization. Int J Mach Learn Cybern 5(2):309–318
Trelea IC (2003) The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf Process Lett 85:317–325
Voglis C, Parsopoulos KE, Lagaris IE (2012) Particle swarm optimization with deliberate loss of information. Soft Comput 16(8):1373–1392
Wan L-Y, Li W (2008) An improved particle swarm optimization algorithm with rank-based selection. In: Proceedings of the IEEE international conference on machine learning and cybernetics, vol 7, pp 4090–4095
Wang X, He Y, Dong L, Zhao H (2011) Particle swarm optimization for determining fuzzy measures from data. Inf Sci 181(19):4230–4252
Whitley D, Lunacek M, Knight J (2004) Ruffled by ridges: how evolutionary algorithms can fail. In: Deb K et al (eds) Lecture Notes in Computer Science (LNCS), vol 3103. Springer, p 294–306
Yadav P, Kumar R, Panda SK, Chang CS (2012) An intelligent tuned harmony search algorithm for optimisation. Inf Sci 196:47–72
Zambrano-Bigiarini M, Clerc M, Rojas R (2013) Standard particle swarm optimisation 2011 at CEC-2013: a baseline for future PSO improvements. In: Proceedings of the IEEE 2013 congress on evolutionary computation, Mexico, p 2337–2344
Zhang S, Chen P, Lee LH, Peng CE, Chen C-H (2011) Simulation optimization using the particle swarm optimization with optimal computing budget allocation. In: Proceedings of the 2011 winter simulation conference, p 4298–4309
Acknowledgments
The authors wish to thank the editor as well as the anonymous reviewers for their constructive comments and suggestions.
Appendix: Test problems
1.1 Standard test suite
The standard test suite consists of the following problems:
Test Problem 0 (TP0—Sphere) [19]. This is a separable \(n\)-dimensional problem, defined as\(\begin{aligned} f(x) = \sum _{i=1}^{n} x_i^2, \end{aligned}\)and it has a single global minimizer, \(x^{*} = (0,0,\ldots ,0)^{\top }\), with \(f(x^{*}) = 0\).
Test Problem 1 (TP1—Generalized Rosenbrock) [19]. This is a non-separable \(n\)-dimensional problem, defined as\(\begin{aligned} f(x) = \sum _{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right) ^2 + \left( x_i - 1 \right) ^2 \right] , \end{aligned}\)and it has a global minimizer, \(x^{*} = (1,1,\ldots ,1)^{\top }\), with \(f(x^{*}) = 0\).
Test Problem 2 (TP2—Rastrigin) [19]. This is a separable \(n\)-dimensional problem, defined as\(\begin{aligned} f(x) = \sum _{i=1}^{n} \left( x_i^2 - 10 \cos \left( 2 \pi x_i \right) + 10 \right) , \end{aligned}\)and it has a global minimizer, \(x^{*} = (0,0,\ldots ,0)^{\top }\), with \(f(x^{*}) = 0\).
Test Problem 3 (TP3—Griewank) [19]. This is a non-separable \(n\)-dimensional problem, defined as\(\begin{aligned} f(x) = \sum _{i=1}^{n} \frac{x_i^2}{4000} - \prod _{i=1}^{n} \cos \left( \frac{x_i}{\sqrt{i}} \right) + 1, \end{aligned}\)and it has a global minimizer, \(x^{*} = (0,0,\ldots ,0)^{\top }\), with \(f(x^{*}) = 0\).
Test Problem 4 (TP4—Ackley) [19]. This is a non-separable \(n\)-dimensional problem, defined as\(\begin{aligned} f(x) = -20 \exp \left( -0.2 \sqrt{\frac{1}{n} \sum _{i=1}^{n} x_i^2} \right) - \exp \left( \frac{1}{n} \sum _{i=1}^{n} \cos \left( 2 \pi x_i \right) \right) + 20 + e, \end{aligned}\)and it has a global minimizer, \(x^{*} = (0,0,\ldots ,0)^{\top }\), with \(f(x^{*}) = 0\).
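For concreteness, the five benchmark functions above can be implemented directly from their definitions; this is a plain-Python sketch (function names are ours):

```python
import math

def sphere(x):
    # TP0: separable; global minimum f(0,...,0) = 0
    return sum(xi * xi for xi in x)

def rosenbrock(x):
    # TP1: non-separable; global minimum f(1,...,1) = 0
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    # TP2: separable; global minimum f(0,...,0) = 0
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)

def griewank(x):
    # TP3: non-separable; global minimum f(0,...,0) = 0
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def ackley(x):
    # TP4: non-separable; global minimum f(0,...,0) = 0
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```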
1.2 Nonlinear systems
This test set consists of six real-application problems, which are modeled as systems of nonlinear equations. Computing a solution of a nonlinear system is a very challenging task that has received ongoing attention from the scientific community. A common methodology for solving such systems is their transformation into an equivalent global optimization problem, which allows the use of a wide range of optimization tools. The transformation produces a single objective function by aggregating all the system's equations, such that the solutions of the original system coincide exactly with those of the derived optimization problem.
Consider the system of nonlinear equations:\(\begin{aligned} \left\{ \begin{array}{ll} f_1(x)=0, \\ f_2(x)=0, \\ \qquad \vdots \\ f_m(x)=0, \end{array} \right. \end{aligned}\)with \(x \in S \subset \mathbb {R}^n\). Then, the objective function,\(\begin{aligned} f(x) = \sum _{i=1}^{m} f_i^2(x), \end{aligned}\)defines an equivalent optimization problem. Obviously, if \(x^*\) with \(f(x^*) = 0\) is a global minimizer of the objective function, then \(x^*\) is also a solution of the corresponding nonlinear system, and vice versa.
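The transformation can be sketched in a few lines. A sum of squared residuals is one common aggregation choice (sums of absolute values are also used); the helper name below is ours:

```python
def system_to_objective(equations):
    """Turn a list of residual functions f_i(x) into a single merit
    function whose global minimizers with value 0 are exactly the
    solutions of the system f_i(x) = 0 for all i."""
    def objective(x):
        # Sum of squared residuals: nonnegative, and zero iff
        # every equation of the system is satisfied at x.
        return sum(fi(x) ** 2 for fi in equations)
    return objective

# Usage on a toy 2x2 linear system with solution (2, 1):
f1 = lambda x: x[0] + x[1] - 3.0
f2 = lambda x: x[0] - x[1] - 1.0
g = system_to_objective([f1, f2])
```

Any global optimizer (such as the PSO variants studied in the paper) can then be applied to the derived objective.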
In our experiments, we considered the following nonlinear systems, previously employed by Grosan and Abraham [9] to justify the usefulness of evolutionary approaches as efficient solvers of nonlinear systems:
Test Problem 5 (TP5—Interval Arithmetic Benchmark) [9]. This problem consists of the following system:
The resulting objective function, defined by Eq. (26), is \(10\)-dimensional with global minimum \(f(x^*) = 0\).
Test Problem 6 (TP6—Neurophysiology Application) [9]. This problem consists of the following system:
where the constants are \(c_i = 0\), \(i=1,2,3,4\). The resulting objective function is \(6\)-dimensional with global minimum \(f(x^*) = 0\).
Test Problem 7 (TP7—Chemical Equilibrium Application) [9]. This problem consists of the following system:
where
The corresponding objective function is \(5\)-dimensional with global minimum \(f(x^*) = 0\).
Test Problem 8 (TP8—Kinematic Application) [9]. This problem consists of the following system:
where \(a_{ki}\), \(1 \leqslant k \leqslant 17\), \(1 \leqslant i \leqslant 4\), denotes the element in the \(k\)-th row and \(i\)-th column of the matrix:
The corresponding objective function is \(8\)-dimensional with global minimum \(f(x^*) = 0\).
Test Problem 9 (TP9—Combustion Application) [9]. This problem consists of the following system:
The corresponding objective function is \(10\)-dimensional with global minimum \(f(x^*) = 0\).
Test Problem 10 (TP10—Economics Modeling Application) [9]. This problem consists of the following system:
where \(1 \leqslant k \leqslant n-1\), and \(c_i = 0\), \(i=1,2,\ldots ,n\). The problem was considered in its \(20\)-dimensional instance. Thus, the corresponding objective function was also \(20\)-dimensional, with global minimum \(f(x^*) = 0\).
Souravlias, D., Parsopoulos, K.E. Particle swarm optimization with neighborhood-based budget allocation. Int. J. Mach. Learn. & Cyber. 7, 451–477 (2016). https://doi.org/10.1007/s13042-014-0308-3