Elsevier

Applied Soft Computing

Volume 28, March 2015, Pages 345-359

Non-parametric particle swarm optimization for global optimization

https://doi.org/10.1016/j.asoc.2014.12.015

Highlights

  • Proposing an improved PSO scheme called non-parametric particle swarm optimization (NP-PSO).

  • Combining local and global topologies with two quadratic interpolation operations to increase the search ability in NP-PSO.

  • Removing PSO parameters in the proposed method.

  • Achieving the best performance in solving various nonlinear functions compared with several well-known PSO algorithms.

Abstract

In recent years, particle swarm optimization (PSO) has been extensively applied to various optimization problems because of its simple structure. However, PSO may become trapped in local optima or exhibit a slow convergence speed when solving complex multimodal problems. Moreover, the algorithm requires several parameters to be set, and tuning these parameters is challenging for some optimization problems. To address these issues, an improved PSO scheme is proposed in this study. The algorithm, called non-parametric particle swarm optimization (NP-PSO), enhances global exploration and local exploitation in PSO without tuning any algorithmic parameter. NP-PSO combines local and global topologies with two quadratic interpolation operations to increase the search ability. Nineteen (19) unimodal and multimodal nonlinear benchmark functions are selected to compare the performance of NP-PSO with several well-known PSO algorithms. The experimental results show that the proposed method considerably enhances the efficiency of the PSO algorithm in terms of solution accuracy, convergence speed, global optimality, and algorithm reliability.

Introduction

PSO [1] is a population-based algorithm inspired by the social behavior of bird flocking and fish schooling. In the algorithm, each member of the swarm, a particle, represents a potential solution, i.e., a point in the search space, and the global optimum is regarded as the location of food. Each particle adjusts its flying direction according to the best experiences obtained by itself and by the swarm in the solution space. The algorithm has a simple concept and is easy to implement; hence, it has received considerable attention for solving real-world optimization problems [2], [3], [4], [5], [6], [7]. Nevertheless, PSO may easily become trapped in local optima and shows a slow convergence rate when solving complex, high-dimensional multimodal objective functions [8].

A number of variant PSO algorithms have been proposed in the literature to overcome these problems. They improve the performance of PSO in different ways, for example by using various types of topologies, adapting parameter selection, or combining PSO with other search techniques.

A local (ring) topological structure PSO (LPSO) [9] and a Von Neumann topological structure PSO (VPSO) [10] were proposed by Kennedy and Mendes to avoid trapping in local optima. According to Kennedy [9], [11], PSO with a small neighborhood might perform better on complex problems, while PSO with a large neighborhood would perform better on simple problems. Suganthan [12] applied a dynamically adjusted neighborhood in which the neighborhood of a particle gradually increases until it includes all particles. Dynamic multi-swarm PSO (DMS-PSO) [13] was suggested by Liang and Suganthan, in which the population is divided into many small swarms that are frequently regrouped to exchange information. Hu and Eberhart [14] applied a dynamic neighborhood in which the m nearest particles in the performance space are chosen as a particle's new neighborhood in each generation. Mendes et al. [15] presented the fully informed particle swarm (FIPS) algorithm, which uses the information of the entire neighborhood to guide the particles toward the best solution. Parsopoulos and Vrahatis combined the global and local versions to form the unified particle swarm optimizer (UPSO) [16]. Gao et al. [17] used PSO with a stochastic search technique and chaotic opposition-based population initialization to solve complex multimodal problems; the resulting algorithm, CSPSO, finds new solutions in the neighborhoods of the previous best positions to escape from local optima.
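The practical difference between the local (ring) and global topologies discussed above is simply which neighbors a particle consults when choosing its social attractor. A minimal sketch (the function name and the choice of k = 1 neighbor on each side are illustrative, not taken from the paper):

```python
def ring_lbest(pbest_vals, i, k=1):
    """Index of the best personal-best value among particle i's ring
    neighbors: i itself plus k neighbors on each side, with indices
    wrapping around the ring. With the global topology, every particle
    would instead use the single swarm-wide best index."""
    n = len(pbest_vals)
    neighborhood = [(i + off) % n for off in range(-k, k + 1)]
    return min(neighborhood, key=lambda j: pbest_vals[j])
```

For example, with personal-best values `[5, 1, 4, 0, 3]`, the global best is particle 3, but particle 0's ring neighborhood `{4, 0, 1}` yields particle 1 as its local best; this slower spread of information is what helps the ring topology resist premature convergence on multimodal problems.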

The fitness-distance-ratio-based PSO (FDR-PSO) was introduced by Peram et al. [18]; in this algorithm, each particle moves toward nearby particles with higher fitness values. Liang et al. [8] developed comprehensive learning particle swarm optimization (CLPSO), which avoids local optima by encouraging each particle to learn from the behavior of different particles on different dimensions.

In other research, a selection operator was first proposed for PSO by Angeline [19]. Other researchers incorporated crossover [20] and mutation [21] operators from genetic algorithms into PSO. An adaptive fuzzy particle swarm optimization (AFPSO) [22] was proposed to adjust the parameters of PSO based on fuzzy inferences.

Beheshti et al. proposed the median-oriented PSO (MPSO) [23], which is based on the information of the median particle. They also introduced centripetal accelerated PSO (CAPSO) [24], based on Newton's laws of motion, to accelerate the learning procedure and the convergence rate in optimization problems. Other variant PSO algorithms have recently been developed based on different techniques [25], [26], [27], [28].

Although the aforementioned algorithms have obtained satisfactory results in many optimization problems, some disadvantages remain. For example, LPSO exhibits a slow convergence rate on unimodal functions [23], [24], and CLPSO is not a good choice for solving unimodal problems [8]. Moreover, the majority of the algorithms require several parameters to be tuned, and setting these parameters can be challenging. Furthermore, some of the algorithms outperform PSO but lose its structural simplicity.

To overcome these drawbacks, this study introduces a non-parametric particle swarm optimization (NP-PSO) algorithm. The proposed method performs a global and local search over the search space with a fast convergence speed using two quadratic interpolation operations. No algorithmic parameter needs to be tuned in NP-PSO; that is, all PSO parameters are removed in the proposed algorithm.

The remainder of this study is organized as follows. Section 2 provides a brief overview of PSO. The proposed NP-PSO algorithm is described in detail in Section 3. In Section 4, NP-PSO is used to solve several unimodal and multimodal benchmark functions, and its performance is compared with that of some PSO algorithms in the literature. Finally, conclusions and directions for further research are presented in Section 5.

Section snippets

Particle swarm optimization (PSO)

PSO is a population-based meta-heuristic algorithm that applies two approaches, global exploration and local exploitation, to find the optimum solution. Exploration is the ability to expand the search space, whereas exploitation is the ability to find the optima around a good solution. The algorithm is initialized by creating a swarm, i.e., a population of N particles, with random positions. Every particle is shown as a vector, (Xi, Vi, Pbesti), in a D-dimensional search space where X
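The classical global-best PSO loop described above can be sketched as follows. This is a minimal generic implementation, not the paper's method; the parameter values (inertia weight w and acceleration coefficients c1, c2) are common illustrative defaults, and it is precisely these parameters that NP-PSO aims to remove:

```python
import random

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    # Random initial positions X, zero velocities V, personal bests Pbest.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [f(x) for x in X]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social components.
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # Position update, clamped to the search bounds.
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

For instance, `pso(lambda x: sum(xi * xi for xi in x), dim=2)` drives the sphere function close to its minimum at the origin within a few hundred iterations.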

NP-PSO – The proposed method

NP-PSO tends to overcome the disadvantages of PSO by avoiding local optima, accelerating convergence, and removing algorithmic parameters. According to [23], [24], PSO shows better performance than LPSO on unimodal problems, while LPSO provides good results on multimodal problems. Hence, both local and global topologies are applied in NP-PSO. Also, the search of new areas is improved by creating new particles in different areas. In the algorithm, each particle uses the best position found
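The details of the paper's two quadratic interpolation operations are not included in this snippet. For orientation, the following shows the standard three-point quadratic interpolation used in many PSO variants: a parabola is fitted through three candidate points per dimension (e.g., positions drawn from the local and global bests) and its vertex is taken as a new trial point. This is a sketch of the general technique, not necessarily the exact operators of NP-PSO:

```python
def quadratic_interpolation(x1, x2, x3, f1, f2, f3):
    """Return the vertex (minimum) of the parabola fitted through the
    points (x1, f1), (x2, f2), (x3, f3) along one dimension. Falls back
    to x2 when the three points are collinear (degenerate parabola)."""
    num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    if den == 0.0:
        return x2  # no curvature information; keep the middle point
    return x2 - 0.5 * num / den
```

Because the formula is exact for quadratics, feeding it three samples of f(x) = (x - 3)^2 recovers the minimizer x = 3 in one step; this parameter-free step is the kind of operation that lets such schemes refine solutions without tunable coefficients.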

Experimental results and discussion

In this section, the proposed NP-PSO algorithm is compared with six well-known PSO algorithms from the literature. The performance of the algorithms is evaluated on various unimodal and multimodal functions in different dimensions. Nineteen (19) benchmark functions [35], [36], [37] are selected in this study.

Conclusions and future research

This study presents a non-parametric PSO algorithm (NP-PSO) to improve the search ability and convergence efficiency of PSO. The algorithm does not use any algorithmic parameter and is as simple and easy to implement as the original PSO. The method enhances global exploration and local exploitation using both local and global topologies and two quadratic interpolation operations. The new strategies increase the particles' ability to explore a larger potential space. To evaluate

Acknowledgements

The authors thank the Research Management Center (RMC), Universiti Teknologi Malaysia (UTM), for its support in R&D, and the Soft Computing Research Group (SCRG), Universiti Teknologi Malaysia (UTM), Johor Bahru, Malaysia, for the inspiration and moral support in conducting this research. They also thank the post-doctoral program, Universiti Teknologi Malaysia (UTM), for the financial support and research activities. This work is supported by The Ministry of Higher Education

References (39)

  • C.-C. Chen, Two-layer particle swarm optimization for unconstrained optimization problems, Appl. Soft Comput. (2011)
  • S.L. Sabat et al., Integrated learning particle swarm optimizer for global optimization, Appl. Soft Comput. (2011)
  • R. Salomon, Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions, Biosystems (1996)
  • J. Kennedy et al., Particle swarm optimization
  • Z. Beheshti et al., Enhancement of artificial neural network learning using centripetal accelerated particle swarm optimization for medical diseases diagnosis, Soft Comput. (2013)
  • J.J. Liang et al., Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput. (2006)
  • J. Kennedy et al., Population structure and particle swarm performance
  • J. Kennedy et al., Neighborhood topologies in fully informed and best-of-neighborhood particle swarms, IEEE Trans. Syst. Man Cybernet. Part C (2006)
  • J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance
