
Neurocomputing

Volume 151, Part 3, 3 March 2015, Pages 1237-1247

Hybrid learning particle swarm optimizer with genetic disturbance

https://doi.org/10.1016/j.neucom.2014.03.081

Abstract

Particle swarm optimizer (PSO) is a population-based stochastic optimization technique that has been successfully applied to engineering and other scientific fields. This paper presents a modification of PSO (hybrid learning PSO with genetic disturbance, HLPSO-GD for short) intended to combat the premature convergence observed in many PSO variants. In HLPSO-GD, the swarm uses a hybrid learning strategy whereby the previous best information of all other particles is adopted to update a particle's position. Additionally, to better exploit the information of excellent particles, a global external archive is introduced to store the best-performing particle of the whole swarm. Furthermore, genetic disturbance (simulated binary crossover and polynomial mutation) is applied to the corresponding particle in the external archive to generate new individuals, which improves the swarm's ability to escape from local optima. Experiments were conducted on a set of traditional multimodal test functions and the CEC 2013 benchmark functions. The results demonstrate the good performance of HLPSO-GD in solving multimodal problems compared with other PSO variants.

Introduction

Particle swarm optimizer (PSO), which originated from the simulation of human and social animal behavior [1], has been successfully applied to engineering and other scientific fields. It has proven to be a powerful competitor to other evolutionary algorithms, such as genetic algorithms [2]. In its running mechanism, PSO simulates the social behavior of individuals: each particle "evolves" through cooperation and competition among individuals over generations. In the swarm, particles evaluate their positions relative to an objective function at each iteration, share the memory of their own flying experiences and the best experience of the swarm, and then use those memories to adjust their own velocities and positions. In the past decade, many researchers have proposed different variants of PSO, including parameter improvements, topology designs, hybrid strategies, and so on [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29].

Most stochastic optimization algorithms (such as PSO and GA) suffer from the 'curse of dimensionality', which implies that algorithm performance deteriorates as the dimensionality of the search space increases. Usually, a basic stochastic global search algorithm can generate a sample from a uniform distribution that covers the entire search space [18]. Based on this idea and combined with our previous work in [33], [34], this paper introduces a variant of PSO (hybrid learning PSO with genetic disturbance, HLPSO-GD for short). In HLPSO-GD, a hybrid learning strategy is introduced in which the historical best information of all other particles is used to update a particle's velocity, and genetic disturbance (simulated binary crossover and polynomial mutation) is applied to the corresponding particle in the external archive to generate a new particle, which improves the swarm's ability to escape from local optima.
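The genetic disturbance named above pairs simulated binary crossover (SBX) with polynomial mutation, both standard real-coded GA operators. A minimal single-variable sketch follows; the distribution indices eta_c and eta_m and the variable bounds are illustrative defaults, not settings taken from this paper:

```python
import random

def sbx(p1, p2, eta_c=20.0):
    """Simulated binary crossover on one real variable.

    Returns two children whose mean equals the parents' mean;
    eta_c controls how close children stay to their parents.
    """
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def poly_mutate(x, lo, hi, eta_m=20.0):
    """Polynomial mutation of one real variable, clamped to [lo, hi]."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    return min(hi, max(lo, x + delta * (hi - lo)))
```

In the context of HLPSO-GD, operators of this form would be applied per dimension to the archived particle to produce the disturbed individuals.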

Additionally, in order to increase the information exchange among all particles, the neighborhood topology is not fixed but dynamically constructed. These strategies help HLPSO-GD preserve the swarm's diversity against premature convergence, especially on complex multimodal problems. The experimental results demonstrate that the proposed HLPSO-GD is able to escape from local optima to some extent when solving complex multimodal problems.

The organization of this paper is as follows: Section 2 gives an overview of the basic version of PSO and discusses some PSO variants; Section 3 introduces the proposed HLPSO-GD; Section 4 presents the experiments to be conducted; and the experimental results and conclusions are presented in Section 5.

Section snippets

Basic particle swarm optimizer (BPSO)

PSO is a population-based optimization algorithm that starts with an initial population of randomly generated particles. Each particle is endowed with a historical memory that enables it to remember the best position it has found so far. Each individual is attracted by its own best experience and its neighbors' best experiences (the best positions found by the neighbors) as follows:

v̄ᵢ(t) = w·v̄ᵢ(t−1) + φ₁r₁(p̄ᵢ − x̄ᵢ(t−1)) + φ₂r₂(p̄_g − x̄ᵢ(t−1))
x̄ᵢ(t) = x̄ᵢ(t−1) + v̄ᵢ(t)

where v̄ᵢ(t) is the velocity of the ith particle,
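The update rule above can be sketched per particle as follows; the parameter values (w = 0.729, φ₁ = φ₂ = 1.494) are common constriction-derived defaults from the literature, not settings taken from this paper:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.729, phi1=1.494, phi2=1.494):
    """One BPSO velocity/position update for a single particle.

    x, v, pbest, gbest are lists of floats of equal length:
    current position, current velocity, the particle's own best
    position, and the neighborhood/global best position.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # fresh per dimension
        vd = (w * v[d]
              + phi1 * r1 * (pbest[d] - x[d])
              + phi2 * r2 * (gbest[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)  # position update uses the new velocity
    return new_x, new_v
```

A full optimizer would loop this over all particles each iteration, re-evaluating fitness and refreshing pbest/gbest afterwards.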

Dynamic neighborhood topology and gbest external archive

In the global neighborhood topology, information is exchanged faster than in the local neighborhood topology. Based on this observation, the choice of neighborhood topology determines how quickly diversity is lost in the swarm. For example, in the global neighborhood topology, information is transferred quickly, but the swarm's diversity is also lost rapidly. Conversely, a small neighborhood topology facilitates the preservation of the swarm's diversity for a longer time.
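As a concrete contrast to the global topology, the classic local (ring) topology restricts each particle's information source to its immediate neighbors. A small sketch of selecting the neighborhood best under a ring of radius 1 (the conventional lbest layout, not a specific setting of this paper):

```python
def ring_neighbors(i, n):
    """Indices in particle i's ring neighborhood of a swarm of size n:
    the particle itself plus its two adjacent particles (wrapping around)."""
    return [(i - 1) % n, i, (i + 1) % n]

def neighborhood_best(i, fitness):
    """Index of the best (lowest-fitness, for minimization) particle
    within particle i's ring neighborhood."""
    return min(ring_neighbors(i, len(fitness)), key=lambda j: fitness[j])
```

Under this topology a good solution propagates at most one position per iteration, which is exactly why diversity decays more slowly than in the fully connected (gbest) case.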

In addition, at

The comparative PSOs and test functions

To assess how competitive HLPSO-GD is, experiments were conducted to compare it against six PSO algorithms that are representative of the state of the art. The six PSOs are: (i) the local-version PSO with constriction factor (CF-LPSO), whose neighborhood topology is a ring in which every vertex is connected to two others [3]; (ii) the global-version PSO with constriction factor (CF-GPSO), whose neighborhood topology is fully connected, every vertex being connected to all others [3]; (iii)

Conclusions and future works

This paper presented an improved PSO with a hybrid learning strategy and genetic disturbance (HLPSO-GD). To enhance the frequency of information exchange among all particles, the neighborhood topology in HLPSO-GD is dynamically constructed, and the velocity of each particle is updated based on all particles in its neighborhood, including itself. In this way, the swarm's diversity can be maintained to some extent, which helps fight against premature convergence. To strengthen the swarm's ability to

Acknowledgments

This work is partially supported by The National Natural Science Foundation of China (Grants nos. 71461027, 71271140, 71001072, 71210107016), The Hong Kong Scholars Program 2012 (Grant no. G-YZ24), Guizhou Province Science and Technology fund (Grant nos. Qian Ke He J [2012] 2340, Qian Ke He J [2012] 2342, LKZS [2012]10, LKZS [2012]22), China Postdoctoral Science Foundation Funded Project (Grant nos. 2012M520936, 2013T60466, 20100480705, 2012T50584), Shanghai Postdoctoral Science Foundation


References (36)

  • R. Salomon, Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions, BioSystems (1996)
  • J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceeding of the 1995 IEEE International Conference on...
  • R. Eberhart, Y. Shi, Comparison between genetic algorithms and particle swarm optimization, in: Proceedings of the 7th...
  • M. Clerc et al., The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. (2002)
  • P.N. Suganthan, Particle swarm optimizer with neighborhood operator, in: Proceedings of the IEEE Congress on...
  • J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: Proceedings of IEEE Congress on...
  • J.J. Liang et al., Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput. (2006)
  • J.L. Nai et al., Enhanced particle swarm optimizer incorporating a weighted particle, Neurocomputing (2014)
  • R. Mendes et al., The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. (2004)
  • T.M. Blackwell, P. Bentley, Don't push me! Collision-avoiding swarms, in: Proceedings of IEEE Congress on Evolutionary...
  • D. Yi et al., An improved PSO-based ANN with simulated annealing technique, Neurocomputing (2005)
  • R. Brits, A. Engelbrecht, F. van den Bergh, A niching particle swarm optimizer, in: Proceedings of the 4th Asia-Pacific...
  • X. Li, Adaptively choosing neighborhood bests using species in a particle swarm optimizer for multimodal function...
  • A. Ratnaweera et al., Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. (2004)
  • S. Yang et al., A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments, IEEE Trans. Evol. Comput. (2010)
  • T. Peram, K. Veeramachaneni, Fitness-distance-ratio based particle swarm optimization, in: Proceeding of the IEEE Swarm...
  • A.S. Mohais, R. Mendes, C. Ward, C. Posthoff, Neighborhood re-structuring in particle swarm optimization, in: AI 2005:...
  • S. Janson et al., A hierarchical particle swarm optimizer and its adaptive variant, IEEE Trans. Syst., Man, Cybern., Part B (2005)
Yanmin Liu received the B.S. degree in Applied Mathematics from Harbin Institute of Technology, Harbin, China, in 2001, the M.S. degree in Control Science and Engineering from Heilongjiang Bayi Agricultural University, Daqing, China, in 2006, and the Ph.D. degree in Decision Theory and Application from Shandong Normal University, Jinan, China, in 2011. He is presently a professor in the College of Mathematical and Computational Science, Zunyi Normal College. His main fields of research are swarm intelligence, bio-inspired computing, multi-objective optimization, and their applications to supply chains.

Ben Niu received the B.S. degree from Hefei Union University, Hefei, China, in 2001, the M.S. degree from Anhui Agriculture University, Hefei, China, in 2004, and the Ph.D. degree from the Shenyang Institute of Automation of the Chinese Academy of Sciences, Shenyang, China, in 2008. He is presently an Associate Professor in the Department of Management Science, Shenzhen University. He is also currently a Postdoctoral Fellow at the Hefei Institute of Intelligent Machines, CAS, and at The Hong Kong Polytechnic University. His main fields of research are swarm intelligence, bio-inspired computing, and their applications to supply chain optimization, business intelligence, and portfolio optimization.

Yuanfeng Luo received the B.S. degree in Applied Mathematics from the College of Mathematical and Computational Science, Zunyi Normal College. He is presently a lecturer in the College of Mathematical and Computational Science, Zunyi Normal College. His main fields of research are swarm intelligence and its applications to supply chains.
