
Information Sciences

Volume 579, November 2021, Pages 231-250

A novel hybrid particle swarm optimization using adaptive strategy

https://doi.org/10.1016/j.ins.2021.07.093

Abstract

Particle swarm optimization (PSO) has been employed to solve numerous real-world problems because of its strong optimization ability and easy implementation. However, PSO still has some shortcomings in solving complicated optimization problems, such as premature convergence and a poor balance between global exploration and local exploitation. A novel hybrid particle swarm optimization using adaptive strategy (ASPSO) is developed to address these difficulties. The contribution of ASPSO is threefold: (1) a chaotic map and an adaptive position updating strategy to balance exploration behavior and exploitation nature in the search process; (2) elite and dimensional learning strategies to effectively enhance the diversity of the population; (3) a competitive substitution mechanism to improve the accuracy of solutions. On various functions from CEC 2017, the numerical experiment results demonstrate that ASPSO is significantly better than 16 other optimization algorithms. Furthermore, we apply ASPSO to a typical industrial problem, the optimization of the melt spinning process, where the results indicate that ASPSO performs better than the other algorithms.

Introduction

All kinds of real-world problems in engineering and the social and physical sciences can often be transformed into optimization problems [1]. With the increasing complexity of actual optimization problems, traditional optimization techniques often struggle to solve them [2]. Therefore, optimization methods have attracted many researchers' interest in the past few years, especially meta-heuristic ones, for example, particle swarm optimization (PSO) [3], the Grey Wolf Optimizer (GWO) [4], and the artificial bee colony (ABC) algorithm [5]. Many tasks, such as feature selection [6] and data clustering [7], use these optimization algorithms. Among them, PSO is preferred and the most popular due to its strong optimization ability and simplicity of implementation [8].

As an efficient and intelligent optimization algorithm, PSO has received wide attention in the research field. PSO and its variants provide solutions close to the optimum, and their performance has been verified in data clustering [7] and various types of real-world problems [9]. However, PSO faces great challenges because it easily falls into local optima and converges prematurely, especially over multimodal fitness landscapes. To this end, substantial numbers of modified versions of PSO have been proposed [10], [11], [12], [13], [14], [15], [16], which can be roughly divided into four categories [17].

  • Parameter setting. Proper parameters, such as the inertia weight ω and the two acceleration coefficients c1 and c2, have significant effects on the convergence of the solution process. Concerning the inertia weight, several modified schemes, such as random, linearly decreasing [18], chaotic dynamic weight [16], and nonlinear time-varying [19], have been used to speed up the convergence rate of PSO; studies report that nonlinear time-varying and chaotic dynamic weights usually perform better. As for the two acceleration coefficients, time-varying acceleration coefficients have been adopted to control the local search efficiently [20].

  • Neighborhood topology. Neighborhood topology controls exploration and exploitation according to information-sharing mechanisms. Researchers have devised different neighborhood topologies that include wheel, ring [21], and Von Neumann topology. Mendes and Kennedy [22] introduced a fully informed PSO (FIPSO), which entirely used the information of the personal best positions of all topological neighbors to guide the movement of particles. Parsopoulos and Vrahatis [23] proposed a unified version (UPSO), which cleverly combined global and local PSO to synthesize their exploration and exploitation capabilities. Instead of using a fixed neighborhood topology, Nasir et al. [24] proposed a dynamic neighbor learning PSO (DNLPSO), which used a few novel strategies to select exemplar particles to update the velocity. Tanweer et al. [15] presented a new dynamic mentoring and self-regulation-based particle swarm optimization (DMeSR-PSO) algorithm using the concept of mentor and mentee.

  • Learning strategy. PSO variants that adopt different learning strategies to control exploration and exploitation have attracted considerable attention. Liang et al. [10] presented a comprehensive learning PSO (CLPSO), which incorporated a novel learning strategy whereby all other particles' personal best information was used to update a given particle's velocity. This strategy preserved the diversity of the population and effectively avoided premature convergence. Several variants of CLPSO have been proposed to balance exploration and exploitation [25], [26], [27], [28]. Nandar et al. [25] proposed the heterogeneous comprehensive learning PSO, which divided the swarm into two subpopulations, one focusing on exploration and the other on exploitation. Zhang et al. [26] presented an enhanced comprehensive PSO, which used a local optima topology to enlarge the particle's search space and increase the convergence speed with a certain probability. Xu et al. [27] proposed a dimensional learning PSO algorithm, in which each particle learned from the personal best experience via a dimensional learning strategy. Wang et al. [28] presented an improved PSO algorithm, using comprehensive learning and a dynamic multi-swarm strategy to construct the exploitation subpopulation exemplar and design the exploration subpopulation exemplar, respectively. Li et al. [29] proposed a multi-population cooperative PSO algorithm, which employed a multidimensional comprehensive learning strategy to improve the accuracy of solutions.

  • Hybrid versions. Hybridizing PSO with other evolutionary algorithms is another focus of researchers. PSO borrowed the ideas from the genetic operators, such as selection, crossover, and mutation [13], [14]. Furthermore, differential evolution [30], sine cosine algorithm [31], and ant colony optimization [32] have been introduced into PSO to solve optimization problems.
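The inertia-weight schedules discussed under parameter setting can be made concrete with a short sketch. The linearly decreasing weight follows the standard rule from [18]; the chaotic variant below couples a logistic map to the decreasing schedule, which is an illustrative assumption rather than the exact rule of [16]:

```python
def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: w_max at t=0 down to w_min at t=t_max."""
    return w_max - (w_max - w_min) * t / t_max

def chaotic_inertia(t, t_max, w_max=0.9, w_min=0.4, z0=0.7):
    """Chaotic dynamic inertia weight (illustrative form, not the exact rule of [16]):
    a logistic map z <- 4z(1-z) perturbs a decreasing schedule."""
    z = z0
    for _ in range(t):          # iterate the logistic map t times
        z = 4.0 * z * (1.0 - z)
    return (w_max - w_min) * (t_max - t) / t_max + w_min * z
```

Because the logistic map never settles, the chaotic weight keeps jittering late in the run, which is exactly the behavior credited with escaping local optima.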
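For the neighborhood-topology category, the ring topology mentioned above is the simplest to state in code: each particle is informed only by its immediate neighbors on a circular index, and its velocity tracks the best of that local neighborhood ("lbest") instead of the global best. A minimal sketch:

```python
def ring_neighbors(i, swarm_size, k=1):
    """Indices of the k nearest neighbors on each side of particle i on a ring."""
    return [(i + d) % swarm_size for d in range(-k, k + 1) if d != 0]

def local_best(fitness, i, k=1):
    """Index of the fittest particle (lowest fitness) among particle i and its
    ring neighbors -- the 'lbest' used by local PSO variants in place of gbest."""
    idx = ring_neighbors(i, len(fitness), k) + [i]
    return min(idx, key=lambda j: fitness[j])
```

Smaller neighborhoods slow the spread of good solutions through the swarm, trading convergence speed for exploration, which is why ring topologies resist premature convergence better than the fully connected ("wheel"/gbest) layout.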
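The comprehensive learning idea of CLPSO [10] in the learning-strategy category can also be sketched briefly. Per dimension, a particle learns either from its own personal best or, with some probability, from the better of two randomly chosen particles' personal bests; this is a simplified sketch that omits CLPSO's per-particle learning probabilities and refreshing gap:

```python
import random

def clpso_exemplar(i, pbest, pbest_fit, pc=0.3):
    """Simplified CLPSO exemplar for particle i: per dimension, with probability
    pc copy from the fitter of two random particles' personal bests (tournament
    of size two), otherwise copy from particle i's own personal best."""
    n, dim = len(pbest), len(pbest[i])
    exemplar = []
    for d in range(dim):
        if random.random() < pc:
            a, b = random.sample(range(n), 2)        # size-two tournament
            winner = a if pbest_fit[a] < pbest_fit[b] else b
            exemplar.append(pbest[winner][d])
        else:
            exemplar.append(pbest[i][d])
    return exemplar
```

Because different dimensions of the exemplar can come from different particles, the swarm draws on far more of its collective memory than the single gbest vector, which is what preserves diversity.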

The PSO variants mentioned above have been successfully applied to solve real optimization problems. However, with the increasing complexity of actual multimodal and high-dimensional optimization problems, existing algorithms cannot guarantee sufficient diversity and efficiency of solutions.

To overcome the above limitations, this paper develops a novel hybrid particle swarm optimization using adaptive strategy, named ASPSO. The main contributions are summarized as follows. We introduce a chaotic map to tune the inertia weight ω to keep the balance between exploration behavior and exploitation nature in the search process. Elite and dimensional learning strategies are designed to replace the personal and global learning strategies, which enhances the diversity of the population and effectively avoids premature convergence. An adaptive position update strategy is used to improve the position quality of the next generation, further balancing exploration and exploitation in the search process. Finally, a competitive substitution mechanism is presented to improve the accuracy of ASPSO solutions.
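To give one of these ingredients a concrete shape: a competitive substitution mechanism can be read as letting a trial solution compete for a slot in the swarm. The sketch below is a generic, hypothetical form of that idea (the paper's exact rule is not reproduced here): perturb the global best and let the trial replace the worst particle if it is fitter.

```python
import random

def competitive_substitution(swarm, fitness_fn, gbest, sigma=0.1):
    """Illustrative competitive substitution (hypothetical form, not the paper's
    exact rule): a Gaussian perturbation of gbest competes against the worst
    particle and displaces it if the trial is fitter (lower fitness)."""
    trial = [x + sigma * random.gauss(0.0, 1.0) for x in gbest]
    fits = [fitness_fn(p) for p in swarm]
    worst = max(range(len(swarm)), key=fits.__getitem__)
    if fitness_fn(trial) < fits[worst]:
        swarm[worst] = trial      # the fitter trial replaces the worst particle
    return swarm
```

Mechanisms of this shape sharpen solution accuracy late in the run, because the replacement only ever happens when it strictly improves the swarm's worst member.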

This paper is structured as follows. Section 2 reviews the basic PSO. Section 3 illustrates the detailed process of the proposed ASPSO algorithm. Section 4 presents results and discussions about the proposed approach with other algorithms. In Section 5, we apply ASPSO to the engineering problem of the melt spinning process. Finally, a short conclusion is given in Section 6.

Section snippets

Particle swarm optimization (PSO)

PSO is a swarm intelligence optimization algorithm inspired by bird flocking and fish schooling [3]. In PSO, each particle represents a candidate solution with velocity and position vectors. When searching a D-dimensional space, a particle i is represented by the position X_i = [x_i1, x_i2, ..., x_iD] and the velocity V_i = [v_i1, v_i2, ..., v_iD]. The velocity and the position are updated by the following formulas [18]:

V_id(t+1) = ω(t) V_id(t) + c1 r1 (pbest_id(t) - X_id(t)) + c2 r2 (gbest_d(t) - X_id(t))
X_id(t+1) = X_id(t) + V_id(t+1)
ω(t) = ω_max - (ω_max - ω_min) t / t_max

where r1 and r2 are uniform random numbers in [0, 1], pbest_i is the personal best position of particle i, gbest is the global best position of the swarm, and t_max is the maximum number of iterations.
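The basic PSO update rules above translate directly into a short loop. The following is a minimal baseline implementation (with the standard linearly decreasing inertia weight and simple bound clamping, both common defaults rather than choices taken from this paper):

```python
import random

def pso(fitness, dim, n=30, iters=200, bounds=(-5.0, 5.0),
        c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Minimal PSO: velocity/position updates with linearly decreasing inertia."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]                      # personal best positions
    pfit = [fitness(x) for x in X]                 # personal best fitness
    g = min(range(n), key=pfit.__getitem__)
    gbest, gfit = pbest[g][:], pfit[g]             # global best
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters    # linearly decreasing inertia
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clamp to bounds
            f = fitness(X[i])
            if f < pfit[i]:                        # update personal best
                pbest[i], pfit[i] = X[i][:], f
                if f < gfit:                       # update global best
                    gbest, gfit = X[i][:], f
    return gbest, gfit
```

On a smooth unimodal function such as the sphere, this baseline converges quickly; it is on multimodal landscapes that the shortcomings motivating ASPSO (premature convergence, loss of diversity) appear.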

The proposed ASPSO algorithm

This section illustrates the proposed ASPSO algorithm in detail, as shown in Fig. 1. The chaotic inertia weight is introduced in Section 3.1. Elite and dimensional learning strategies are described in Section 3.2. The adaptive position update strategy and the competitive substitution mechanism are presented in Section 3.3 and Section 3.4, respectively.

Benchmark functions and comparison algorithm

The performance of the proposed ASPSO is tested on the CEC 2017 benchmark functions [37]. Among the thirty functions, F2 is excluded from this experimentation because it shows unstable behavior, especially in high dimensions. The benchmark functions are divided into four categories: unimodal functions (F1–F3), simple multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30), as stated in Table 1.

To validate the performances of the ASPSO, we chose eight

Application in the optimization of the melt spinning process

In this part, we apply the proposed ASPSO algorithm to optimize the melt spinning process, a typical complex real-world optimization problem. Melt spinning is one of the traditional methods of producing polymer fibers. Its principle is to feed high-polymer raw materials into a screw extruder, convey them to a heating zone by a rotating screw, and then send them to a metering pump after extrusion. Many varieties of synthetic fibers, polyester, cotton, and polypropylene are all

Conclusions and future work

In this research, we integrated four strategies into PSO and obtained the ASPSO algorithm. In ASPSO, to better balance exploration behavior and exploitation nature, a chaotic map and an adaptive position updating strategy are proposed. Meanwhile, elite and dimensional learning strategies are devised to effectively enhance the diversity of the population and avoid premature convergence. Finally, a competitive substitution mechanism is presented to improve the accuracy of ASPSO for complex

CRediT authorship contribution statement

Rui Wang: Conceptualization, Methodology, Software, Data curation, Writing - original draft, Writing - review & editing. Kuangrong Hao: Funding acquisition, Supervision, Writing - review & editing. Lei Chen: Data curation, Investigation. Tong Wang: Data curation, Investigation. Chunli Jiang: Formal analysis.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by the National Key Research and Development Plan from Ministry of Science and Technology (2016YFB0302701), the Fundamental Research Funds for the Central Universities (2232021A-10), National Natural Science Foundation of China (no. 61903078), Natural Science Foundation of Shanghai (19ZR1402300, 20ZR1400400), and Fundamental Research Funds for the Central Universities and Graduate Student Innovation Fund of Donghua University (CUSF-DH-D-2021050). In addition, we

References (50)

  • M.R. Tanweer et al., Self regulating particle swarm optimization algorithm, Inf. Sci. (2015)
  • C. Yang et al., Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight, Appl. Soft Comput. (2015)
  • J. Zou et al., A close neighbor mobility method using particle swarm optimizer for solving multimodal optimization problems, Inf. Sci. (2020)
  • M. Nasir et al., A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization, Inf. Sci. (2012)
  • N. Lynn et al., Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation, Swarm Evol. Comput. (2015)
  • G. Xu et al., Particle swarm optimization based on dimensional learning strategy, Swarm Evol. Comput. (2019)
  • S. Wang et al., Heterogeneous comprehensive learning and dynamic multi-swarm particle swarm optimizer with two mutation operators, Inf. Sci. (2020)
  • W. Li et al., Multipopulation cooperative particle swarm optimization with a mixed mutation strategy, Inf. Sci. (2020)
  • S. Wang et al., Self-adaptive mutation differential evolution algorithm based on particle swarm optimization, Appl. Soft Comput. (2019)
  • K. Chen et al., A hybrid particle swarm optimizer with sine cosine acceleration coefficients, Inf. Sci. (2018)
  • V. Jindal et al., An improved hybrid ant particle optimization (IHAPO) algorithm for reducing travel time in VANETs, Appl. Soft Comput. (2018)
  • A.H. Gandomi et al., Chaotic bat algorithm, J. Comput. Sci. (2014)
  • K. Chen et al., Hybrid particle swarm optimization with spiral-shaped mechanism for feature selection, Expert Syst. Appl. (2019)
  • S. Mirjalili et al., The whale optimization algorithm, Adv. Eng. Softw. (2016)
  • J. Derrac et al., A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. (2011)