BY-NC-ND 3.0 license Open Access Published by De Gruyter April 21, 2016

Particle Swarm Optimization with Enhanced Global Search and Local Search

  • Jie Wang and Hongwen Li

Abstract

In order to mitigate the problems of premature convergence and low search accuracy that exist in traditional particle swarm optimization (PSO), this paper presents PSO with enhanced global search and local search (EGLPSO). In EGLPSO, most of the particles are concentrated on global search at the beginning; as the iterations proceed, the particles gradually shift to local search. A new updating strategy is used for global search, and a partial mutation strategy is applied to the leader particle of local search to find a better position. During each iteration, the best particle of global search exchanges information with some particles of local search. EGLPSO is tested on a set of 12 benchmark functions and compared with four other PSO variants as well as six further well-known PSO variants. The experimental results show that EGLPSO greatly improves the performance of traditional PSO in terms of search accuracy, search efficiency, and global optimality.

1 Introduction

Particle swarm optimization (PSO) [10] is a swarm intelligence technique that was first proposed by Kennedy and Eberhart in 1995. The method performs well on optimization problems, and because the PSO algorithm is easy to understand and implement, it is widely used in many areas. For example, it can be used in function optimization, artificial neural network training, power systems [6, 9], fuzzy system control, and so on. Moreover, it can also be used to solve discrete optimization problems [8] and dynamic optimization problems [3]. The algorithm is inspired by the social behavior of a flock of birds searching for food. The PSO algorithm is composed of a group of particles, and the position of each particle is a potential solution in the multidimensional search space. Each particle has its own fitness value and velocity, and each particle updates its position relying on its own experience and the experiences of other members.

The PSO algorithm is a kind of population-based metaheuristic [14]. In population-based metaheuristics, if an individual discovers a better position, all the other individuals move closer to it. The problem is that if this position is a local optimum, the population will get trapped in premature convergence. Thus, PSO may easily suffer from premature convergence [13], and the situation is even worse for complex, multimodal search problems. To mitigate the problem of premature convergence, many improvements have been presented.

To improve the performance of standard PSO, Clerc and Kennedy [1] introduced a PSO variant with a constriction factor, in which the fixed inertia weight is completely replaced by a constriction factor; this variant improves the search accuracy of standard PSO on some functions. Shi and Eberhart [15] proposed a variant in which a linearly decreasing inertia weight strategy is applied to standard PSO. To relieve the premature convergence problem and improve the search accuracy, Jordehi [5] proposed another PSO variant, named enhanced leader PSO, which is mainly based on a five-staged successive mutation strategy applied to the leader particle at each iteration. van den Bergh and Engelbrecht [17] presented a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, in which multiple swarms cooperatively optimize different components of the solution vector. The cooperating swarms are less likely to get trapped in premature convergence, so this variant greatly improves solution quality and robustness. Zhan and Zhang [21] used an orthogonal learning strategy for PSO to discover, via an orthogonal experimental design, more useful information lying in a particle's own experience and its neighbors' experiences. In this variant, a particle learns not only from its own experience but also from its neighbors' experiences; the diversity of learning strategies leads to better results and mitigates premature convergence. Mendes et al. [12] introduced the fully informed particle swarm, in which all neighbors are a source of influence. With proper neighborhood topologies, this variant greatly improves solution accuracy and relieves the premature convergence problem. Liang et al. [11] used a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity, named the comprehensive learning particle swarm optimizer. In this variant, other particles' previous best positions serve as examples for any particle to learn from; this learning strategy increases population diversity and effectively relieves premature convergence. Adaptive PSO was proposed by Zhan and Zhang [20]: according to the identified evolutionary state, adaptive control strategies are developed for the inertia weight and acceleration coefficients. These adaptive control strategies enhance the exploration and exploitation abilities, and the state-dependent parameter settings help mitigate premature convergence. Wang et al. [18] presented PSO with adaptive mutation, which enhances population diversity by employing an adaptive mutation strategy and can therefore find better positions and mitigate premature convergence to some extent. To mitigate the problem of premature convergence and improve search accuracy, this paper presents PSO with enhanced global search and local search (EGLPSO).

The rest of the paper is organized as follows: Section 2 introduces the basic PSO algorithm. Section 3 presents EGLPSO. Section 4 presents the test functions, the parameter setting for each algorithm, the results in 30 and 50 dimensions, comparison with other well-known PSO variants, and discussion. Finally, conclusions are given in Section 5.

2 PSO

In PSO, the particle swarm applies both global search and local search to find better solutions. The process of PSO is simple. First, the algorithm randomly generates a certain number of particles in a D-dimensional search space, and each particle's velocity is also generated randomly. Then, each particle's position is updated according to its own best previous position and the current global best position. The position of the ith particle in the D-dimensional search space is represented as Xi=[Xi1, Xi2, …, XiD], and its velocity is represented as Vi=[Vi1, Vi2, …, ViD]. The best previous position of the ith particle is represented as Pi=[Pi1, Pi2, …, PiD], and the best position of the population is represented as Pg=[Pg1, Pg2, …, PgD].

The velocity Vid(t+1) and position Xid(t+1) of the ith particle in dimension d at the (t+1)th iteration are updated according to Eqs. (1) and (2), respectively:

$$V_{id}(t+1) = V_{id}(t) + c_1 \times r_1 \times (P_{id} - X_{id}) + c_2 \times r_2 \times (P_{gd} - X_{id}), \quad (1)$$
$$X_{id}(t+1) = X_{id}(t) + V_{id}(t+1), \quad (2)$$

where i is the particle index; c1 and c2 are called cognitive and social acceleration coefficients, respectively; r1 and r2 are two random numbers in the interval [0,1]; and t=1, 2, …, tmax indicates the iteration.

Shi and Eberhart [15] used the inertia weight to improve the performance of the particle swarm optimizer, and the new equations are as follows:

$$V_{id}(t+1) = \omega \times V_{id}(t) + c_1 \times r_3 \times (P_{id} - X_{id}) + c_2 \times r_4 \times (P_{gd} - X_{id}), \quad (3)$$
$$X_{id}(t+1) = X_{id}(t) + V_{id}(t+1), \quad (4)$$

where ω represents the inertia weight, which decreases over the iterations. A large ω favors global search, whereas a small ω favors local search.
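As a concrete illustration, the following minimal Python sketch implements the inertia-weight update of Eqs. (3) and (4) on the sphere function f1 from Table 1 (Eqs. (1) and (2) correspond to the special case ω = 1). The function names, the random seed, and the boundary clipping are illustrative choices, not part of the original description.

```python
import numpy as np

def sphere(x):
    # f1 in Table 1: global minimum 0 at the origin.
    return np.sum(x ** 2)

def inertia_weight_pso(obj, dim=30, n_particles=40, iters=500,
                       w=0.729, c1=2.0, c2=2.0, lower=-5.12, upper=5.12, seed=0):
    """Minimal inertia-weight PSO following Eqs. (3) and (4); illustrative sketch only."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, (n_particles, dim))                  # positions
    V = rng.uniform(lower - upper, upper - lower, (n_particles, dim))  # velocities
    P = X.copy()                                                       # personal bests P_i
    p_fit = np.array([obj(x) for x in X])
    g = P[p_fit.argmin()].copy()                                       # global best P_g
    for t in range(iters):
        r3 = rng.random((n_particles, dim))
        r4 = rng.random((n_particles, dim))
        V = w * V + c1 * r3 * (P - X) + c2 * r4 * (g - X)              # Eq. (3)
        X = np.clip(X + V, lower, upper)                               # Eq. (4), clipped to bounds
        fit = np.array([obj(x) for x in X])
        improved = fit < p_fit
        P[improved], p_fit[improved] = X[improved], fit[improved]
        g = P[p_fit.argmin()].copy()
    return g, p_fit.min()

best_x, best_f = inertia_weight_pso(sphere)
print(best_f)
```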

3 EGLPSO

In traditional PSO, if one particle discovers a local optimum, the other particles quickly move toward it, and the whole swarm is then likely to get trapped there. To mitigate the problem of premature convergence and improve search accuracy, this paper proposes EGLPSO.

In EGLPSO, the particle swarm is divided into two groups at the beginning. One group is used for global search, and the other is used for local search. EGLPSO emphasizes global search in the initial stage, so the particles assigned to global search initially outnumber those assigned to local search. As the iterations proceed, EGLPSO needs a stronger local search, so the number of local-search particles is increased linearly. In order to find a better position, the particles of global search update their positions by a forward or backward strategy for each dimension [4]; moreover, these particles do not carry a velocity vector. The best particle of global search (the one with the smallest fitness value among the global-search particles), Pg1, exchanges information with the best particle (smallest fitness value), Pg2, and the worst particle (largest fitness value), Pw, of local search. If the fitness of Pg1 is smaller than that of Pw, then Pw takes the position of Pg1. Afterwards, Pg1 uses a unique strategy to update its position: Pg1 acquires the information of Pg2 in some randomly chosen dimensions. To improve the accuracy of the solution, a mutation strategy is applied to Pg2, so that the position of Pg2 is perturbed within a small range.

As the iterations proceed, EGLPSO decreases the number of particles used for global search according to Eq. (5):

$$n_1 = a - \mathrm{round}\!\left(b \times \frac{t}{t_{\max}}\right). \quad (5)$$

The number of particles for local search is increased correspondingly, according to Eq. (6):

$$n_2 = n - n_1, \quad (6)$$

where n1 is the current number of global-search particles; a is the initial number of global-search particles; round(·) is the rounding-to-integer function; b is the total number of particles removed from global search over the run; n2 is the current number of local-search particles; and n is the size of the particle swarm.
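For illustration, a minimal Python sketch of this schedule, assuming the values a = b = 30 and n = 40 listed later in Table 2; the function name is illustrative.

```python
def split_counts(t, t_max, n=40, a=30, b=30):
    """Eqs. (5) and (6): how many particles do global vs. local search at iteration t."""
    n1 = a - round(b * t / t_max)   # global-search particles shrink linearly over the run
    n2 = n - n1                     # the remaining particles perform local search
    return n1, n2

print(split_counts(0, 500))    # (30, 10): mostly global search at the start
print(split_counts(500, 500))  # (0, 40): only local search at the end
```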

In order to enhance the ability of global search, the global-search particles use a new strategy to update their positions. Their positions are changed according to Eq. (7):

$$X_{gd}(t+1) = X_{gd}(t) + c_3 \times \mathrm{rand} \times \big(PS(d,2) - PS(d,1)\big), \quad (7)$$

where Xg represents a particle of global search; c3 represents the scope adjustment coefficient; PS(d,2) and PS(d,1) represent the upper and lower bounds of the particles in dimension d, respectively; and rand is a random number in the interval [0,1].

If the fitness of Xg(t+1) is larger than that of Xg(t), Xg(t+1) reverts to its original position Xg(t). Then, Xg tries to find a better position based on Eq. (8):

$$X_{gd}(t+1) = X_{gd}(t) - c_3 \times \mathrm{rand} \times \big(PS(d,2) - PS(d,1)\big). \quad (8)$$

If the fitness of Xg(t+1) is again larger than that of Xg(t), the particle remains at its original position Xg(t).
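The forward/backward move of Eqs. (7) and (8) can be sketched as follows. The paper applies the update per dimension; for brevity, this illustrative Python sketch applies the step to all dimensions at once and compares whole-position fitness values, which is one possible reading. The bound handling and names are assumptions.

```python
import numpy as np

def global_search_move(x, obj, lower, upper, c3=0.2, rng=None):
    """Try a forward step (Eq. 7); if it does not improve fitness, try the backward
    step (Eq. 8); if that also fails, keep the original position."""
    if rng is None:
        rng = np.random.default_rng()
    f_old = obj(x)
    step = c3 * rng.random(x.shape) * (upper - lower)   # c3 * rand * (PS(d,2) - PS(d,1))
    for trial in (x + step, x - step):                  # forward, then backward
        trial = np.clip(trial, lower, upper)
        if obj(trial) < f_old:
            return trial
    return x

# Example on the Rastrigin function (f2 in Table 1)
rastrigin = lambda z: 10 * z.size + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))
x = np.random.default_rng(1).uniform(-5.12, 5.12, 30)
x_new = global_search_move(x, rastrigin, -5.12, 5.12)
```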

To improve the search accuracy, a new mutation strategy is applied to local search. The position of Pg2 is changed according to Eq. (9):

$$P_{g2,d}(t+1) = P_{g2,d}(t) + c_4 \times (\mathrm{rand}_1 - \mathrm{rand}_2) \times \left(1 - \frac{t}{t_{\max}}\right), \quad (9)$$

where c4 represents the small-scope adjustment coefficient, and rand1 and rand2 are random numbers in the interval [0,1]. This coefficient controls the variation range of the particle: if it is small, the best particle of local search changes its position within a small scope and can find a better position more easily. Once the particles of global search have located the region of the global optimum, this coefficient plays a very important role in refining the solution accuracy.
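A minimal sketch of the mutation in Eq. (9), assuming rand1 and rand2 are drawn uniformly from [0,1] for each dimension and using the value c4 = 0.5 reported in Table 2; the names are illustrative.

```python
import numpy as np

def mutate_local_leader(p_g2, t, t_max, c4=0.5, rng=None):
    """Eq. (9): small-scope mutation of Pg2, the best particle of local search."""
    if rng is None:
        rng = np.random.default_rng()
    rand1 = rng.random(p_g2.shape)
    rand2 = rng.random(p_g2.shape)
    # The factor (1 - t/t_max) shrinks the mutation range as the run progresses.
    return p_g2 + c4 * (rand1 - rand2) * (1 - t / t_max)
```

In a complete implementation, the mutated position would presumably be kept only if it improves on Pg2, although Eq. (9) itself does not state an acceptance rule.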

If the fitness of Pg1 is smaller than that of Pw, then Pw takes the position of Pg1. Afterwards, Pg1 updates its own position according to the pseudocode in Figure 1.

Figure 1: Pseudocode for Pg1 to Update Itself.
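The pseudocode of Figure 1 is not reproduced here. Based solely on the description above (Pg1 acquires the information of Pg2 in some randomly chosen dimensions), one possible reading is sketched below; the number of copied dimensions and the improvement-only acceptance test are guesses, not the authors' stated procedure.

```python
import numpy as np

def update_pg1(p_g1, p_g2, obj, rng=None):
    """Hypothetical reading of Figure 1: copy Pg2's values into a few random
    dimensions of Pg1 and keep the change only if the fitness improves."""
    if rng is None:
        rng = np.random.default_rng()
    k = max(1, p_g1.size // 10)                         # assumed number of dimensions to copy
    dims = rng.choice(p_g1.size, size=k, replace=False)
    trial = p_g1.copy()
    trial[dims] = p_g2[dims]                            # exchange information in random dimensions
    return trial if obj(trial) < obj(p_g1) else p_g1
```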

The procedure of EGLPSO is described as follows (a compact sketch of the complete loop is given after the list):

  • Step 1: Initialize the particle swarm. The positions and velocities of the particles are generated randomly in the search space.

  • Step 2: Change the number of particles for global search using Eq. (5), and update the number of particles for local search using Eq. (6).

  • Step 3: Update the positions of the global-search particles, using Eq. (7) or (8).

  • Step 4: Update the positions and velocities of the local-search particles, using Eqs. (3) and (4).

  • Step 5: Update the best particle’s (Pg2) position, using Eq. (9).

  • Step 6: If f(Pg1) is smaller than f(Pw), then Pw will take the position of Pg1.

  • Step 7: Update the position of Pg1, using the pseudocode in Figure 1.

  • Step 8: Return to Step 2 until the stopping criterion is met.
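To make the eight steps concrete, the following self-contained Python sketch implements one possible reading of the whole procedure with the parameter values of Table 2 (ω = 0.4, c1 = c2 = 2, a = b = 30, c3 = 0.2, c4 = 0.5). The boundary clipping, the acceptance test after the Eq. (9) mutation, and the Figure 1 step (copying Pg2's information into a few random dimensions of Pg1) are assumptions where the paper leaves details open; all names are illustrative.

```python
import numpy as np

def rastrigin(x):
    # f2 in Table 1; global minimum 0 at the origin.
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def eglpso(obj, dim=30, n=40, iters=500, lower=-5.12, upper=5.12,
           w=0.4, c1=2.0, c2=2.0, a=30, b=30, c3=0.2, c4=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, (n, dim))
    V = rng.uniform(lower - upper, upper - lower, (n, dim))
    P = X.copy()                                          # personal bests
    p_fit = np.apply_along_axis(obj, 1, X)
    for t in range(1, iters + 1):
        n1 = a - round(b * t / iters)                     # Step 2: Eq. (5), global-search count
        # Step 3: forward/backward update for global-search particles (Eqs. 7-8)
        for i in range(n1):
            f_old = obj(X[i])
            step = c3 * rng.random(dim) * (upper - lower)
            for trial in (X[i] + step, X[i] - step):
                trial = np.clip(trial, lower, upper)
                if obj(trial) < f_old:
                    X[i] = trial
                    break
        # Step 4: inertia-weight update for local-search particles (Eqs. 3-4)
        loc = slice(n1, n)
        g_loc = P[loc][p_fit[loc].argmin()]               # Pg2: best of local search
        r1, r2 = rng.random((n - n1, dim)), rng.random((n - n1, dim))
        V[loc] = w * V[loc] + c1 * r1 * (P[loc] - X[loc]) + c2 * r2 * (g_loc - X[loc])
        X[loc] = np.clip(X[loc] + V[loc], lower, upper)
        fit = np.apply_along_axis(obj, 1, X)
        better = fit < p_fit
        P[better], p_fit[better] = X[better], fit[better]
        # Step 5: small-scope mutation of Pg2 (Eq. 9); acceptance if improved is assumed
        j2 = n1 + int(p_fit[loc].argmin())
        mutated = P[j2] + c4 * (rng.random(dim) - rng.random(dim)) * (1 - t / iters)
        if obj(mutated) < p_fit[j2]:
            P[j2], p_fit[j2] = mutated, obj(mutated)
        # Steps 6-7: information exchange between Pg1 and local search
        if n1 > 0:
            j1 = int(fit[:n1].argmin())                   # Pg1: best of global search
            jw = n1 + int(fit[loc].argmax())              # Pw: worst of local search
            if fit[j1] < fit[jw]:
                X[jw] = X[j1].copy()                      # Step 6: Pw takes Pg1's position
            d = rng.choice(dim, size=max(1, dim // 10), replace=False)
            X[j1, d] = P[j2, d]                           # Step 7 (assumed reading of Figure 1)
    return P[p_fit.argmin()], p_fit.min()

best_x, best_f = eglpso(rastrigin)
print(best_f)
```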

The complete flowchart of EGLPSO is shown in Figure 2.

Figure 2: Flowchart of EGLPSO.

4 Experiments

4.1 Test Functions

To verify the performance of EGLPSO, a set of 12 benchmark functions [2, 16, 19] is used in the experiments. These benchmark functions are widely adopted for testing many kinds of optimization algorithms. In this paper, all functions are tested in 30 and 50 dimensions, and the minimum of every function is zero. The 12 benchmark functions are listed in Table 1; a short code illustration of two of them follows the table.

Table 1:

Test Functions.

| No. | Function name | Formulation | x* | Range |
| --- | --- | --- | --- | --- |
| f1 | Sphere | $f_1(x)=\sum_{i=1}^{D} x_i^2$ | $[0]^D$ | $[-5.12, 5.12]^D$ |
| f2 | Rastrigin | $f_2(x)=10D+\sum_{i=1}^{D}\left(x_i^2-10\cos(2\pi x_i)\right)$ | $[0]^D$ | $[-5.12, 5.12]^D$ |
| f3 | Non-continuous Rastrigin | $f_3(x)=10D+\sum_{i=1}^{D}\left(y_i^2-10\cos(2\pi y_i)\right)$, with $y_i=x_i$ if $\lvert x_i\rvert<0.5$ and $y_i=0.5\,\mathrm{round}(2x_i)$ if $\lvert x_i\rvert\ge 0.5$ | $[0]^D$ | $[-5.12, 5.12]^D$ |
| f4 | Griewank | $f_4(x)=\sum_{i=1}^{D}\frac{x_i^2}{4000}-\prod_{i=1}^{D}\cos\!\left(\frac{x_i}{\sqrt{i}}\right)+1$ | $[0]^D$ | $[-600, 600]^D$ |
| f5 | Weierstrass | $f_5(x)=\sum_{i=1}^{D}\left(\sum_{k=0}^{k_{\max}}\left[a^k\cos\!\left(2\pi b^k(x_i+0.5)\right)\right]\right)-D\sum_{k=0}^{k_{\max}}\left[a^k\cos(\pi b^k)\right]$ | $[0]^D$ | $[-0.5, 0.5]^D$ |
| f6 | Rosenbrock | $f_6(x)=\sum_{i=1}^{D-1}\left[100\left(x_i^2-x_{i+1}\right)^2+(x_i-1)^2\right]$ | $[1]^D$ | $[-5, 10]^D$ |
| f7 | Ackley | $f_7(x)=20+e-20\exp\!\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\!\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right)$ | $[0]^D$ | $[-32, 32]^D$ |
| f8 | Schwefel | $f_8(x)=418.9829\times D-\sum_{i=1}^{D}x_i\sin\!\left(\sqrt{\lvert x_i\rvert}\right)$ | $[420.96]^D$ | $[-500, 500]^D$ |
| f9 | Schwefel 2.22 | $f_9(x)=\sum_{i=1}^{D}\lvert x_i\rvert+\prod_{i=1}^{D}\lvert x_i\rvert$ | $[0]^D$ | $[-10, 10]^D$ |
| f10 | Dixon & Price | $f_{10}(x)=(x_1-1)^2+\sum_{i=2}^{D}i\left(2x_i^2-x_{i-1}\right)^2$ | $[1, 0^{D-1}]$ | $[-10, 10]^D$ |
| f11 | Zakharov | $f_{11}(x)=\sum_{i=1}^{D}x_i^2+\left(\sum_{i=1}^{D}0.5\,i\,x_i\right)^2+\left(\sum_{i=1}^{D}0.5\,i\,x_i\right)^4$ | $[0]^D$ | $[-5, 10]^D$ |
| f12 | Levy | $f_{12}(x)=\sin^2(\pi y_1)+\sum_{i=1}^{D-1}(y_i-1)^2\left[1+10\sin^2(\pi y_i+1)\right]+(y_D-1)^2\left[1+\sin^2(2\pi y_D)\right]$, with $y_i=1+\frac{x_i-1}{4}$ | $[1]^D$ | $[-15, 30]^D$ |
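As referenced above, two of the functions in Table 1 written in Python (vectorized with NumPy) are shown below; the Ackley constants follow the standard definition, and the implementations are illustrative rather than taken from the paper.

```python
import numpy as np

def rastrigin(x):
    # f2 in Table 1: highly multimodal, global minimum 0 at x = 0, range [-5.12, 5.12]^D.
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):
    # f7 in Table 1: global minimum 0 at x = 0, range [-32, 32]^D.
    x = np.asarray(x, dtype=float)
    d = x.size
    return (20 + np.e
            - 20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d))

print(rastrigin(np.zeros(30)), ackley(np.zeros(30)))  # both evaluate to ~0 at the optimum
```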

4.2 Parameter Settings

This paper presents a PSO variant called EGLPSO. In order to verify the effectiveness of the new algorithm, EGLPSO is first compared with standard PSO (SPSO), PSO with constriction factor (CFPSO), linearly decreasing weight PSO (LDWPSO), and enhanced leader PSO (ELPSO). For a fair comparison among all the PSO variants, the population size is set to 40 and the maximum number of iterations to 500 in both 30 and 50 dimensions [7]. The specific parameter settings are listed in Table 2.

Table 2:

Parameter Settings of the Involved PSO Algorithms.

| Algorithm | Parameter setting | References |
| --- | --- | --- |
| SPSO | ω=0.729; c1=c2=2 | [1] |
| CFPSO | k=0.729; c1=c2=2 | [2] |
| LDWPSO | ω: 0.9–0.4; c1=c2=2 | [4] |
| ELPSO | ω: 0.9–0.4; c1=c2=2 | [5] |
| CPSO-H | c1=c2=1.49 | [6] |
| OLPSO-G | ω: 0.9–0.4; c1=c2=2; G=5; Vmax=0.2 × range | [21] |
| FIPSO | χ=0.729; Σci=4.1 | [12] |
| CLPSO | ω: 0.9–0.4; c=2; m=7 | [11] |
| APSO | ω: 0.9–0.4; c1, c2 ∈ [1.5, 2.5] and c1+c2 ∈ [3, 4] | [20] |
| AMPSO | ω=0.7298; c1=c2=1.49618; m=10; r=0.05 | [18] |
| EGLPSO | ω=0.4; c1=c2=2; a=b=30; c3=0.2; c4=0.5 (c4=0.3 for f5) | — |

4.3 Results for Problems with 30 Dimensions

In this section, EGLPSO is compared with SPSO, CFPSO, LDWPSO, and ELPSO. CFPSO and LDWPSO are the most common variants of SPSO. In ELPSO, a five-staged mutation strategy is applied to the leader particle at each iteration; in EGLPSO, a mutation strategy is also applied to the leader particle of local search, but with only one stage. To guarantee the reliability of the experimental results, all algorithms are run 30 times independently. Table 3 reports the means and standard deviations (SDs) produced by the involved algorithms on the 12 test problems, with the best results shown in boldface for each problem. Figure 3 shows the convergence characteristics of each algorithm on the 12 benchmark functions. If the search accuracy is below 1.0000e–16, MATLAB reports the value as zero.

Table 3:

The Optimization Results (D=30, mean±SD).

| Function | SPSO | CFPSO | LDWPSO | ELPSO | EGLPSO |
| --- | --- | --- | --- | --- | --- |
| f1 | 2.31e+0 ± 1.47e–1 | 6.34e–2 ± 9.45e–2 | 1.54e–1 ± 7.41e+0 | 5.40e–2 ± 1.34e–2 | 1.77e–95 ± 4.15e–96 |
| f2 | 8.54e+1 ± 1.21e+1 | 1.23e+2 ± 6.44e+1 | 5.23e+1 ± 4.12e+1 | 2.34e+1 ± 1.15e+1 | 0e+0 ± 0e+0 |
| f3 | 8.01e+1 ± 4.23e+1 | 9.88e+1 ± 3.09e+1 | 1.34e+2 ± 5.75e+1 | 1.05e+2 ± 3.07e+1 | 0e+0 ± 0e+0 |
| f4 | 4.14e+0 ± 1.00e+0 | 1.04e+0 ± 3.14e–1 | 8.88e–1 ± 3.10e+1 | 2.31e+0 ± 3.11e+1 | 0e+0 ± 0e+0 |
| f5 | 9.43e+0 ± 1.77e+0 | 1.54e+1 ± 1.11e+0 | 7.87e+0 ± 1.47e+0 | 0e+0 ± 0e+0 | 0e+0 ± 0e+0 |
| f6 | 4.34e+3 ± 5.23e+2 | 6.21e+4 ± 3.41e+4 | 5.21e+4 ± 3.24e+4 | 1.45e+4 ± 3.45e+4 | 1.41e–3 ± 7.52e–3 |
| f7 | 8.42e+0 ± 1.30e+0 | 3.94e+0 ± 4.21e–1 | 4.84e+0 ± 1.36e+0 | 1.94e+0 ± 4.30e+0 | 8.43e–16 ± 1.44e–16 |
| f8 | 5.22e+3 ± 7.87e+2 | 5.65e+3 ± 7.88e+3 | 6.65e+3 ± 4.17e+2 | 6.61e+3 ± 8.62e+2 | 4.14e–4 ± 1.75e–3 |
| f9 | 1.31e+1 ± 6.72e+0 | 1.28e+1 ± 1.11e+1 | 2.04e+1 ± 1.42e+1 | 1.08e+1 ± 6.68e+0 | 2.18e–42 ± 4.34e–41 |
| f10 | 1.69e+2 ± 1.30e+2 | 4.64e+0 ± 2.71e+0 | 5.97e+2 ± 5.02e+1 | 1.45e+0 ± 9.64e–1 | 5.37e–1 ± 2.85e–1 |
| f11 | 1.61e+2 ± 9.33e+1 | 5.92e+2 ± 2.79e+2 | 2.84e+2 ± 1.80e+2 | 1.82e+2 ± 8.91e+1 | 1.17e–17 ± 2.52e–17 |
| f12 | 6.25e+1 ± 2.23e+1 | 1.08e+2 ± 7.02e+1 | 8.23e+1 ± 3.63e+1 | 1.02e+2 ± 3.49e+1 | 2.54e–4 ± 2.83e–6 |
Figure 3: The Convergence Characteristics on Selected Functions (D=30). (A) Sphere (f1); (B) Rastrigin (f2); (C) non-continuous Rastrigin (f3); (D) Griewank (f4); (E) Weierstrass (f5); (F) Rosenbrock (f6); (G) Ackley (f7); (H) Schwefel (f8); (I) Schwefel 2.22 (f9); (J) Dixon & Price (f10); (K) Zakharov (f11); and (L) Levy (f12).

From Table 3, it can be observed that EGLPSO is significantly better than the other four PSO variants on almost all the benchmark functions, the exception being f5. Although EGLPSO and ELPSO achieve the same accuracy on f5, the convergence speed of EGLPSO is slower than that of ELPSO. For functions f2–f5, EGLPSO quickly finds the global optimum, which shows that EGLPSO has an outstanding ability in both local search and global search. From Figure 3, it can be observed that EGLPSO converges faster than the other four PSO algorithms on f2–f4. For functions f6 and f8, the other algorithms provide very poor results because they are trapped in local optima, whereas EGLPSO can help the particle swarm escape this predicament and find a better solution. For function f7, the other algorithms reach relatively small values but cannot continue to optimize the problem; in EGLPSO, a proper mutation strategy applied to local search greatly improves the search accuracy, so EGLPSO acquires better solutions than the other algorithms. For function f10, the search accuracy of EGLPSO is only slightly better than that of the other algorithms, and EGLPSO also falls into a local optimum on this function. On f1, f9, f11, and f12, EGLPSO greatly improves the performance of PSO in terms of both search accuracy and convergence speed.

4.4 Comparison with Other Well-known PSO Variants (D=30)

In this section, we compare the performance of EGLPSO with six well-known PSO variants on f1–f8. The involved algorithms and parameter settings are listed in Table 2. These algorithms are the cooperative particle swarm optimizer (CPSO-H), orthogonal learning PSO (OLPSO-G), the fully informed particle swarm (FIPSO), the comprehensive learning particle swarm optimizer for global optimization of multimodal functions (CLPSO), adaptive PSO (APSO), and PSO with adaptive mutation for multimodal optimization (AMPSO).

Inspired by CPSO-H, a multiple-swarm strategy is applied in EGLPSO. In OLPSO-G, FIPSO, and CLPSO, particles can learn from other particles' best information; inspired by this, the worst particle of local search learns from the best particle of global search in EGLPSO. Unlike in APSO, the inertia weight and acceleration coefficients are fixed in EGLPSO. In AMPSO, there are many mutation strategies to choose from, whereas EGLPSO uses only one mutation strategy, which is simpler to understand.

The means of the final solutions are shown in Table 4. The data in Table 4 are taken from Refs. [11, 18, 21]. In those references, the population size is set to 40, the same as for EGLPSO; however, their maximum number of iterations is >10,000, which is much larger than that of EGLPSO. The best results are shown in boldface.

Table 4:

The optimization results (D=30, Mean).

| Function | CPSO-H | OLPSO-G | FIPSO | CLPSO | APSO | AMPSO | EGLPSO |
| --- | --- | --- | --- | --- | --- | --- | --- |
| f1 | 1.16e–113 | 4.12e–54 | 2.69e–12 | 4.46e–14 | 1.54e–1 | 5.40e–2 | 1.77e–95 |
| f2 | 0e+0 | 1.07e+0 | 2.12e+0 | 1.50e–4 | 6.27e+0 | 1.86e+0 | 0e+0 |
| f3 | 1.00e–1 | 2.34e–5 | 4.35e+0 | 1.93e–3 | 2.27e+0 | 1.24e–11 | 0e+0 |
| f4 | 3.63e–2 | 4.83e–3 | 1.31e–1 | 4.37e–9 | 1.20e–2 | 0e+0 | 0e+0 |
| f5 | 4.74e–15 | 3.75e–8 | 2.02e–3 | 5.62e–7 | 4.77e–2 | 0e+0 | 0e+0 |
| f6 | 1.37e+1 | 2.15e+1 | 2.78e+0 | 2.08e+1 | 1.83e+1 | 1.76e+1 | 1.41e–3 |
| f7 | 2.25e–14 | 7.98e–15 | 3.75e–15 | 1.85e–7 | 1.09e–14 | 7.63e–15 | 8.43e–16 |
| f8 | 2.65e+3 | 1.27e+3 | 7.41e+1 | 3.88e–4 | 3.74e+2 | 1.71e+3 | 4.14e–4 |

From Table 4, we can see that EGLPSO obtains the best results on six functions, whereas CLPSO achieves the best result on only one function, and CPSO-H and AMPSO each achieve the best result on two functions. The other six algorithms fall into a local optimum on f6; this function has a narrow valley from the perceived local optima to the global optimum, and it is not easy to find the global optimum. The result achieved by EGLPSO on f6 is far better than those of the other six algorithms.

4.5 Results for Problems with 50 Dimensions

In this section, the five algorithms are tested on the 12 benchmark functions in 50 dimensions, with the same parameter settings as in 30 dimensions. Table 5 presents the means and SDs, with the best results shown in boldface. Because the convergence characteristics in 50 dimensions are similar to those in 30 dimensions, they are not displayed. From Table 5, it can be seen that EGLPSO surpasses all the other algorithms on all functions except f5, on which EGLPSO acquires the same accuracy as ELPSO.

Table 5:

The optimization results (D=50, mean±SD).

| Function | SPSO | CFPSO | LDWPSO | ELPSO | EGLPSO |
| --- | --- | --- | --- | --- | --- |
| f1 | 4.14e+0 ± 1.99e+0 | 1.11e–1 ± 1.26e–1 | 1.21e+1 ± 1.58e+1 | 1.09e+1 ± 1.14e+1 | 3.14e–83 ± 1.23e–84 |
| f2 | 2.54e+2 ± 2.98e+1 | 2.48e+2 ± 9.59e+1 | 1.49e+2 ± 6.01e+1 | 1.29e+2 ± 2.54e+1 | 0e+0 ± 0e+0 |
| f3 | 1.88e+2 ± 3.98e+1 | 2.88e+1 ± 8.40e+1 | 1.08e+2 ± 2.91e+1 | 9.49e+1 ± 5.46e+1 | 0e+0 ± 0e+0 |
| f4 | 5.85e+1 ± 5.38e+1 | 1.47e+0 ± 3.23e–1 | 2.43e+1 ± 4.07e+1 | 1.29e+1 ± 1.58e+1 | 0e+0 ± 0e+0 |
| f5 | 1.27e+1 ± 4.40e+0 | 2.82e+1 ± 2.09e+0 | 9.89e+0 ± 1.69e+0 | 0e+0 ± 0e+0 | 0e+0 ± 0e+0 |
| f6 | 3.75e+4 ± 4.96e+4 | 1.94e+5 ± 9.52e+4 | 1.13e+5 ± 8.91e+4 | 1.75e+4 ± 2.83e+4 | 1.74e+1 ± 2.14e+1 |
| f7 | 1.36e+1 ± 3.36e+0 | 5.80e+0 ± 5.66e+0 | 6.27e+0 ± 3.47e+0 | 1.15e+1 ± 3.99e+0 | 7.43e–15 ± 1.21e–16 |
| f8 | 9.80e+3 ± 2.09e+3 | 1.19e+4 ± 8.75e+2 | 9.72e+3 ± 3.95e+2 | 9.65e+3 ± 6.80e+2 | 6.47e–4 ± 1.03e–5 |
| f9 | 2.71e+1 ± 2.38e+1 | 2.23e+1 ± 1.46e+1 | 2.49e+1 ± 8.22e+0 | 3.90e+1 ± 1.79e+1 | 3.43e–40 ± 1.24e–41 |
| f10 | 5.13e+3 ± 3.53e+3 | 4.88e+1 ± 5.76e+1 | 9.85e+2 ± 1.14e+3 | 8.40e+0 ± 1.26e+1 | 5.64e–1 ± 2.21e–1 |
| f11 | 1.05e+3 ± 3.42e+2 | 1.22e+3 ± 3.37e+2 | 7.76e+2 ± 4.08e+2 | 4.72e+2 ± 5.76e+2 | 6.08e–6 ± 1.36e–5 |
| f12 | 2.81e+2 ± 1.14e+2 | 2.42e+2 ± 8.92e+1 | 2.99e+2 ± 1.36e+2 | 2.15e+2 ± 5.55e+1 | 2.77e–4 ± 3.93e–4 |

4.6 Discussion

The experimental results show that the proposed EGLPSO significantly improves the performance of standard PSO in terms of search accuracy and convergence speed. Because a new updating strategy for global search and a proper mutation strategy for local search are applied in EGLPSO, more positions are explored; in this way, when the particle swarm starts to get trapped in premature convergence, EGLPSO can more easily escape and find a better solution. To avoid wasting particles, EGLPSO adapts how the swarm is split between the two tasks: the number of particles for global search decreases with the iterations, while the number of particles for local search increases. Early on, EGLPSO needs more particles for global search in order to quickly narrow down the region of the global optimum; once a small region around the global optimum is found, EGLPSO needs more particles for local search to improve the search accuracy. The new algorithm also uses a unique information exchange strategy to increase the diversity of the population. The comparisons show that EGLPSO performs better than the other algorithms on most functions.

From the results, we also observe that EGLPSO does not always perform best on the unimodal problems (f1, f6, f9): because a large amount of potential space is searched, EGLPSO cannot converge as fast as the original PSO. EGLPSO obtains better performance on the multimodal problems (the other functions); therefore, EGLPSO is well suited to solving multimodal problems.

5 Conclusions

In this paper, we present PSO with enhanced global search and local search (EGLPSO). In EGLPSO, most of the particles are concentrated on global search at the beginning. By using the new updating strategy for global search, the particle swarm can effectively jump out of local optima; the experimental results in 30 and 50 dimensions confirm this. Following a proper strategy, the best particle of global search exchanges information with the worst and the best particles of local search. To improve the search accuracy, a proper partial mutation strategy is applied to the leader particle of local search, and the experimental results show that this mutation strategy greatly improves the search accuracy. The comparison with six other well-known PSO variants demonstrates that EGLPSO performs better in terms of search accuracy, convergence speed, and global optimality on most functions.

EGLPSO achieves its excellent performance at the cost of more running time. Because more new positions are evaluated at each iteration, EGLPSO takes much longer than SPSO to find the global optimum under the same conditions. Using a better strategy for global search may resolve this problem.

Finally, there is still room for further research on PSO. Using different update strategies for different particles may yield better results with a smaller population. Devising diversified learning strategies may lead to superior results, and devising new mutation strategies for suitable particles may lead to more promising results.

Bibliography

[1] M. Clerc and J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002), 58–73. doi:10.1109/4235.985692.

[2] R. C. Eberhart and Y. H. Shi, Particle swarm optimization: developments, applications and resources, in: Congress on Evolutionary Computation (CEC 2001), pp. 81–86, Seoul, South Korea, 2001.

[3] A. R. Jordehi, Particle swarm optimisation for dynamic optimisation problems: a review, Neural Comput. Appl. 25 (2014), 1507–1516. doi:10.1007/s00521-014-1661-6.

[4] A. R. Jordehi, A review on constraint handling strategies in particle swarm optimisation, Neural Comput. Appl. 26 (2015), 1265–1275. doi:10.1007/s00521-014-1808-5.

[5] A. R. Jordehi, Enhanced leader PSO (ELPSO): a new PSO variant for solving global optimisation problems, Appl. Soft Comput. 26 (2015), 401–417. doi:10.1016/j.asoc.2014.10.026.

[6] A. R. Jordehi, Particle swarm optimisation (PSO) for allocation of FACTS devices in electric transmission systems: a review, Renew. Sustain. Energy Rev. 52 (2015), 1260–1267. doi:10.1016/j.rser.2015.08.007.

[7] A. R. Jordehi and J. Jasni, Parameter selection in particle swarm optimisation: a survey, J. Exp. Theor. Artif. Intell. 25 (2013), 527–542. doi:10.1080/0952813X.2013.782348.

[8] A. R. Jordehi and J. Jasni, Particle swarm optimisation for discrete optimisation problems: a review, Artif. Intell. Rev. 43 (2015), 243–258. doi:10.1007/s10462-012-9373-8.

[9] A. R. Jordehi, J. Jasni, N. Abd Wahab, M. Z. Kadir and M. S. Javadi, Enhanced leader PSO (ELPSO): a new algorithm for allocating distributed TCSC's in power systems, Int. J. Elect. Power Energy Syst. 64 (2015), 771–784. doi:10.1016/j.ijepes.2014.07.058.

[10] J. Kennedy and R. C. Eberhart, Particle swarm optimization, in: Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.

[11] J. J. Liang, A. K. Qin, P. N. Suganthan and S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput. 10 (2006), 281–295. doi:10.1109/TEVC.2005.857610.

[12] R. Mendes, J. Kennedy and J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. 8 (2004), 204–210. doi:10.1109/TEVC.2004.826074.

[13] B. Nakisa, M. Z. A. Nazri, M. N. Rastgoo and S. Abdullah, A survey: particle swarm optimization based algorithms to solve premature convergence problem, J. Comput. Sci. 10 (2014), 1758–1765. doi:10.3844/jcssp.2014.1758.1765.

[14] S. Nesmachnow, An overview of metaheuristics: accurate and efficient methods for optimisation, Int. J. Metaheuristics 3 (2014), 320–347. doi:10.1504/IJMHEUR.2014.068914.

[15] Y. Shi and R. C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, pp. 69–73, Piscataway, USA, 1998.

[16] Y. Shi and R. C. Eberhart, Comparing inertia weights and constriction factors in particle swarm optimization, in: Proceedings of the 2000 Congress on Evolutionary Computation, pp. 84–88, Piscataway, NJ, 2000.

[17] F. van den Bergh and A. P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput. 8 (2004), 225–239. doi:10.1109/TEVC.2004.826069.

[18] H. Wang, W. Wang and Z. Wu, Particle swarm optimization with adaptive mutation for multimodal optimization, Appl. Math. Comput. 221 (2013), 296–305. doi:10.1016/j.amc.2013.06.074.

[19] X. Yao, Y. Liu and G. Lin, Evolutionary programming made faster, IEEE Trans. Evol. Comput. 3 (1999), 82–102. doi:10.1109/4235.771163.

[20] Z. Zhan, J. Zhang, Y. Li and H. Chung, Adaptive particle swarm optimization, IEEE Trans. Syst. Man Cybern. B Cybern. 39 (2009), 1362–1381. doi:10.1007/978-3-540-87527-7_21.

[21] Z.-H. Zhan, J. Zhang and Y. Li, Orthogonal learning particle swarm optimization, IEEE Trans. Evol. Comput. 15 (2011), 832–847. doi:10.1145/1569901.1570147.

Received: 2015-12-1
Published Online: 2016-4-21
Published in Print: 2017-7-26

©2017 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
