Particle swarm optimization with FUSS and RWS for high dimensional functions

https://doi.org/10.1016/j.amc.2008.05.147

Abstract

High dimensional optimization problems play an important role in many complex engineering areas. Although many variants of particle swarm optimization (PSO) have been proposed, most of them are tested and compared on problems with dimensions no larger than 300. Since high dimensional numerical problems maintain strong linkage and correlation among different variables, and the number of local optima increases significantly with the dimension, this paper proposes a novel variant of PSO that aims to provide a balance between exploration and exploitation capability. Firstly, the fitness uniform selection strategy (FUSS), which exerts a weak selection pressure, is incorporated into the standard PSO. Secondly, a "random walk strategy" (RWS), with four different forms, is designed to further enhance the exploration capability for escaping from a local optimum. Finally, the proposed PSO combined with FUSS and RWS is applied to seven famous high dimensional benchmarks with dimensions up to 3000. Simulation results demonstrate the good performance of the new method in solving high dimensional multi-modal problems when compared with two other variants of PSO.

Introduction

With industrial and scientific development, many new optimization problems need to be solved. Several of them are complex, multi-modal, high dimensional, non-differentiable problems. Therefore, new optimization techniques have been designed, such as the genetic algorithm [1], ant colony optimization [2], etc. However, due to the strong linkage and correlation among different variables, these algorithms are easily trapped in a local optimum and fail to obtain a reasonable solution.

Particle swarm optimization (PSO) [3], [4] is a population-based, self-adaptive search optimization method motivated by the observation of simplified animal social behaviors such as fish schooling, bird flocking, etc. It is becoming very popular due to its simplicity of implementation and ability to quickly converge to a reasonably good solution [5], [6], [7].

In a PSO system, multiple candidate solutions coexist and collaborate simultaneously. Each solution, called a "particle", flies through the problem search space looking for the optimal position to land. As the search progresses, a particle adjusts its position according to its own "experience" as well as the experience of neighboring particles. Tracking and memorizing the best position encountered builds the particle's experience. For that reason, PSO possesses a memory (i.e. every particle remembers the best position it reached in the past). The PSO system combines a local search method (through self-experience) with a global search method (through neighboring experience), attempting to balance exploration and exploitation.

A particle's status in the search space is characterized by two factors, its position and its velocity, which are updated by the following equations:

$$v_j(t+1) = w\,v_j(t) + c_1 r_1 \left(p_j(t) - x_j(t)\right) + c_2 r_2 \left(p_g(t) - x_j(t)\right), \tag{1}$$

$$x_j(t+1) = x_j(t) + v_j(t+1), \tag{2}$$

where $v_j(t)$ and $x_j(t)$ represent the velocity and position vectors of particle $j$ at time $t$, respectively, $p_j(t)$ is the best position that particle $j$ has found so far, and $p_g(t)$ denotes the corresponding best position found by the whole swarm. The cognitive coefficient $c_1$ and the social coefficient $c_2$ are constants known as acceleration coefficients, and $r_1$ and $r_2$ are two separately generated, uniformly distributed random numbers in the range $[0,1]$.

The first part of (1) represents the previous velocity, which provides the necessary momentum for particles to roam across the search space. The second part, known as the "cognitive" component, represents the personal thinking of each particle; it encourages the particles to move toward their own best positions found so far. The third part, known as the "social" component, represents the collaborative effect of the particles in finding the global optimal solution; it always pulls the particles toward the best position found by the swarm so far.
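For concreteness, the update in Eqs. (1) and (2) can be sketched in a few lines of Python. This is a minimal vectorized sketch; the default parameter values below are common conventions from the PSO literature, not settings prescribed by this paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One standard PSO update, Eqs. (1)-(2).

    x, v      : (num_particles, D) arrays of positions and velocities
    pbest     : (num_particles, D) best position found by each particle
    gbest     : (D,) best position found by the whole swarm
    w, c1, c2 : inertia weight and acceleration coefficients
                (common defaults, not values from this paper)
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # uniform in [0, 1], drawn per dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x = x + v                                                  # Eq. (2)
    return x, v
```

Here $r_1$ and $r_2$ are drawn per dimension, a common implementation choice; some formulations draw a single scalar per particle instead.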

Since particle swarm optimization is a relatively new swarm intelligence technique, many researchers have focused their attention on this area. One famous improvement is the introduction of the inertia weight [8], similar to the temperature schedule in the simulated annealing algorithm. Empirical results showed that a linearly decreasing inertia weight gives better performance, for example from 1.4 to 0 [8], or from 0.9 to 0.4 [9], [10]. In 1999, Suganthan [11] proposed a time-varying acceleration coefficient strategy in which both c1 and c2 are linearly decreased during the course of the run; his simulation results showed, however, that acceleration coefficients fixed at 2.0 generate better solutions. Following Suganthan's method, Venter [12] found that a small cognitive coefficient and a large social coefficient could improve the performance significantly. Further, Ratnaweera et al. [13] investigated time-varying acceleration coefficients: in this strategy, the cognitive coefficient is linearly decreased during the course of the run, while the social coefficient is linearly increased.
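All of the linear schedules mentioned above can be expressed with one interpolation rule. The sketch below is illustrative; the endpoint values shown for the time-varying acceleration coefficients are assumptions in the spirit of [13], not values taken from this paper:

```python
def linear_schedule(start, end, t, t_max):
    """Linearly interpolate a PSO parameter from `start` to `end`
    over t_max iterations."""
    return start + (end - start) * t / t_max

# Inertia weight decreasing from 0.9 to 0.4 [9], [10]:
#   w = linear_schedule(0.9, 0.4, t, t_max)
# Time-varying acceleration coefficients in the spirit of [13]
# (endpoints are illustrative assumptions): the cognitive coefficient
# decreases while the social coefficient increases over the run:
#   c1 = linear_schedule(2.5, 0.5, t, t_max)
#   c2 = linear_schedule(0.5, 2.5, t, t_max)
```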

Hybridizing PSO with the Kalman filter, Monson designed a new Kalman filter particle swarm optimization algorithm [14]. Similarly, Sun proposed a new quantum particle swarm optimization [15] in 2004. From the convergence point of view, Cui designed a globally convergent algorithm, stochastic particle swarm optimization [16]. There are still many other modified methods, such as fast PSO [17], predicted PSO [18], etc. The details of these algorithms can be found in the corresponding references.

The PSO algorithm has been empirically shown to perform well on many optimization problems. However, it may easily get trapped in a local optimum when solving high dimensional multi-modal problems. With respect to the PSO model, several papers have dealt with premature convergence, such as the addition of a queen particle [19], the alternation of the neighborhood topology [20], and the introduction of subpopulations together with giving the particles a physical extension [21]. However, these methods are generally tested on famous benchmarks of moderate dimension (less than 300), and their optimization results are not very good in the high dimensional case. In order to improve PSO's performance on high dimensional multi-modal problems, we present a new PSO variant combined with a fitness uniform selection strategy (FUSS) and a random walk strategy (RWS).

The rest of this paper is organized as follows: in Section 2, we summarize two significant previous developments of the standard PSO methodology. One method was used as the basis for our novel development, whereas the other was selected as a comparative measure of the performance of the method proposed in this paper. In Section 3, we introduce the extension to PSO proposed in this paper. Experimental settings for the benchmarks and simulation strategies are explained in Section 4, and the results in comparison with the two previous developments are presented in Section 5.

Some previous work

In this paper, unconstrained optimization problems are formulated as a $D$-dimensional minimization problem:

$$\min f(x), \quad x = [x_1, x_2, \ldots, x_D],$$

where $D$ is the number of parameters to be optimized.

In this section, we summarize two significant previous developments, which serve as both a basis for and performance gauge of the novel strategies introduced in this paper.

Proposed new developments

Although there are numerous variants of the PSO, premature convergence when solving high dimensional problems is still the main deficiency of the PSO. In the standard PSO, each particle learns from its pbest (pj(t) for particle j) and gbest (pg(t)) simultaneously. Restricting the social learning aspect to only the gbest makes the original PSO converge fast. However, because all particles in the swarm learn from the gbest even if the current gbest is far from the global optimum, particles
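For concreteness, the two ingredients named in this section can be sketched as follows. The sketch assumes FUSS follows Hutter's original formulation (draw a target fitness level uniformly between the current worst and best fitness, then select the individual whose fitness is nearest to it), and it uses a plain Gaussian step as a stand-in for the random walk, since the paper's four RWS forms are not given in this excerpt:

```python
import numpy as np

def fuss_select(fitness, rng=None):
    """Fitness uniform selection (FUSS): weak selection pressure is
    obtained by sampling a fitness *level* uniformly, not an individual."""
    rng = rng or np.random.default_rng()
    f = np.asarray(fitness)
    target = rng.uniform(f.min(), f.max())     # uniform over fitness levels
    return int(np.argmin(np.abs(f - target)))  # individual nearest that level

def random_walk(x, step=0.1, rng=None):
    """Placeholder random-walk perturbation for escaping a local optimum;
    the paper's four specific RWS forms are not shown in this excerpt."""
    rng = rng or np.random.default_rng()
    return x + step * rng.standard_normal(x.shape)
```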

Benchmarks

Seven well-known benchmarks are used to evaluate the performance of the new developments introduced in this paper. These benchmarks are widely used in evaluating the performance of PSO methods. They are:

Sphere Model:

$$f_1(x) = \sum_{j=1}^{n} x_j^2,$$

where $|x_j| \le 100.0$, and $f_1(x^*) = f_1(0, 0, \ldots, 0) = 0.0$.

Rosenbrock Function:

$$f_2(x) = \sum_{j=1}^{n-1} \left[ 100\left(x_{j+1} - x_j^2\right)^2 + \left(x_j - 1\right)^2 \right],$$

where $|x_j| \le 30.0$, and $f_2(x^*) = f_2(1, 1, \ldots, 1) = 0.0$.

Quartic Function, i.e. Noise:

$$f_3(x) = \sum_{j=1}^{n} j x_j^4 + \mathrm{rand}(0, 1),$$

where $|x_j| \le 1.28$, $\mathrm{rand}(0, 1)$ is a uniformly distributed random number and
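The three benchmarks listed above translate directly into code. A minimal sketch following the formulas and bounds given (the remaining four benchmarks are cut off in this excerpt):

```python
import numpy as np

def sphere(x):        # f1: sum of squares, |x_j| <= 100, minimum 0 at the origin
    return np.sum(x ** 2)

def rosenbrock(x):    # f2: |x_j| <= 30, minimum 0 at (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def quartic_noise(x, rng=None):  # f3: |x_j| <= 1.28, plus uniform noise
    rng = rng or np.random.default_rng()
    j = np.arange(1, x.size + 1)
    return np.sum(j * x ** 4) + rng.random()
```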

Results from benchmark simulations

To make a precise comparison, the simulation is divided into two parts: first, we compare the proposed methods MPSO1 to MPSO4 on all seven benchmarks. After the modified version with the best average performance is chosen, we compare it with SPSO and ARPSO. Finally, the conclusion is given.

Conclusion

This paper proposes a new model incorporating the fitness uniform selection strategy (FUSS) and a random walk strategy (RWS). To make the new method effective, four different random walk strategies are designed and compared. Simulation results show that the first proposed technique is a robust optimization search method. There are still some further research topics that need to be considered:

  • (1)

    Since the introduction of FUSS can result in a completely different performance between MPSO1 and ARPSO, how to design

Acknowledgement

This work was supported by the National Natural Science Foundation of China under Grant No. 60674104.

References (28)

  • J.H. Holland

    Adaptation in Natural and Artificial Systems: An Introductory Analysis with Application to Biology, Control, and Artificial Intelligence

    (1992)
  • M. Dorigo et al.

    Ant colony system: a cooperative learning approach to the traveling salesman problem

    IEEE Transactions on Evolutionary Computation

    (1997)
  • R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of 6th International Symposium...
  • J. Kennedy et al.

    Particle swarm optimization

    Proceedings of IEEE International Conference on Neural Networks

    (1995)
  • H.Y. Shen, X.Q. Peng, J.N. Wang, Z.K. Hu, A mountain clustering based on improved PSO algorithm, Lecture Notes on...
  • R.C. Eberhart, Y. Shi, Extracting rules from fuzzy neural network by particle swarm optimization, in: Proceedings of...
  • Q.Y. Li, Z.P. Shi, J. Shi, Z.Z. Shi, Swarm intelligence clustering algorithm based on attractor, Lecture Notes on...
  • Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE International Conference on...
  • Y. Shi, R.C. Eberhart, Parameter selection in particle swarm optimization, in: Proceedings of the 7th Annual Conference...
  • Y. Shi et al.

    Empirical study of particle swarm optimization

    in: Proceedings of the Congress on Evolutionary Computation

    (1999)
  • P.N. Suganthan

    Particle swarm optimizer with neighbourhood operator

    in: Proceedings of the Congress on Evolutionary Computation

    (1999)
  • G. Venter, Particle swarm optimization, in: Proceedings of 43rd AIAA/ASME/ASCE/AHS/ASC Structure, Structures Dynamics...
  • A. Ratnaweera et al.

    Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients

    IEEE Transactions on Evolutionary Computation

    (2004)
  • C.K. Monson, K.D. Seppi, The Kalman swarm: a new approach to particle motion in swarm optimization, in: Proceedings of...