A novel hybrid particle swarm optimization using adaptive strategy
Introduction
All kinds of real-world problems in engineering and the social and physical sciences can often be transformed into optimization problems [1]. With the increasing complexity of actual optimization problems, traditional optimization techniques are often too difficult to apply [2]. Therefore, optimization methods have attracted many researchers' interest in the past few years, especially meta-heuristic ones, for example, particle swarm optimization (PSO) [3], the Grey Wolf Optimizer (GWO) [4], and the artificial bee colony (ABC) algorithm [5]. Many tasks, such as feature selection [6] and data clustering [7], use these optimization algorithms. Among them, PSO is the most popular due to its strong optimization ability and simplicity of implementation [8].
As an efficient and intelligent optimization algorithm, PSO has received wide attention in the research field. PSO and its variants provide solutions close to the optimum, and their performance has been verified in data clustering [7] and various types of real-world problems [9]. However, PSO faces great challenges because it easily falls into local optima and converges prematurely, especially over multimodal fitness landscapes. To this end, many modified versions of PSO have been proposed [10], [11], [12], [13], [14], [15], [16], which can be roughly divided into four categories [17].
- Parameter setting. Proper parameters such as the inertia weight and the two acceleration coefficients have significant effects on the convergence of the solution process. Concerning the inertia weight, several modified schemes, such as random, linearly decreasing [18], chaotic dynamic weight [16], and nonlinear time-varying [19], have been used to speed up the convergence rate of PSO. These studies report that nonlinear time-varying and chaotic dynamic weights usually perform better. As for the acceleration coefficients, time-varying acceleration coefficients have been adopted to control the local search efficiently [20].
- Neighborhood topology. Neighborhood topology controls exploration and exploitation according to information-sharing mechanisms. Researchers have devised different neighborhood topologies, including the wheel, ring [21], and von Neumann topologies. Mendes and Kennedy [22] introduced the fully informed PSO (FIPSO), which uses the personal best positions of all topological neighbors to guide the movement of particles. Parsopoulos and Vrahatis [23] proposed a unified version (UPSO), which combines global and local PSO to synthesize their exploration and exploitation capabilities. Instead of using a fixed neighborhood topology, Nasir et al. [24] proposed a dynamic neighborhood learning PSO (DNLPSO), which uses several novel strategies to select exemplar particles for the velocity update. Tanweer et al. [15] presented a dynamic mentoring and self-regulation-based PSO (DMeSR-PSO) algorithm built on the concept of mentors and mentees.
- Learning strategy. Different learning strategies for controlling exploration and exploitation have attracted considerable attention. Liang et al. [10] presented the comprehensive learning PSO (CLPSO), which incorporates a novel learning strategy whereby the personal best information of all other particles can be used to update a given particle's velocity. This strategy preserves the diversity of the population and effectively avoids premature convergence. Several variants of CLPSO were later proposed to balance exploration and exploitation [25], [26], [27], [28]. Nandar et al. [25] proposed the heterogeneous comprehensive learning PSO, which divides the swarm into two subpopulations, one focusing on exploration and the other on exploitation. Zhang et al. [26] presented an enhanced comprehensive learning PSO, which uses a local-optima topology to enlarge each particle's search space and increase the convergence speed with a certain probability. Xu et al. [27] proposed a dimensional learning PSO algorithm, in which each particle learns from its personal best experience via a dimensional learning strategy. Wang et al. [28] presented an improved PSO algorithm that uses comprehensive learning to construct the exploitation subpopulation exemplar and a dynamic multi-swarm strategy to design the exploration subpopulation exemplar. Li et al. [29] proposed a multi-population cooperative PSO algorithm, which employs a multidimensional comprehensive learning strategy to improve the accuracy of solutions.
- Hybrid versions. Hybridizing PSO with other evolutionary algorithms is another focus of researchers. PSO has borrowed ideas from genetic operators such as selection, crossover, and mutation [13], [14]. Furthermore, differential evolution [30], the sine cosine algorithm [31], and ant colony optimization [32] have been introduced into PSO to solve optimization problems.
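To make the parameter-setting category above concrete, the following is a minimal sketch of two widely used schedules: a linearly decreasing inertia weight and time-varying acceleration coefficients. The boundary values (0.9 to 0.4 for the weight, 2.5 to 0.5 for the coefficients) are typical choices from the PSO literature, not values taken from this paper.

```python
def linear_inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: large early (exploration),
    small late (exploitation)."""
    return w_max - (w_max - w_min) * t / t_max

def time_varying_coefficients(t, t_max, c_init=2.5, c_final=0.5):
    """Time-varying acceleration coefficients: the cognitive coefficient c1
    decreases over time while the social coefficient c2 increases, shifting
    the swarm from individual search toward collective convergence."""
    c1 = c_init - (c_init - c_final) * t / t_max
    c2 = c_final + (c_init - c_final) * t / t_max
    return c1, c2
```

At the midpoint of the run both coefficients meet at 1.5, after which the social term dominates.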
The PSO variants mentioned above have been successfully applied to real optimization problems. However, with the increasing complexity of actual multimodal and high-dimensional optimization problems, existing algorithms cannot guarantee sufficient diversity and efficiency of the solutions.
To overcome the above limitations, this paper develops a novel hybrid particle swarm optimization using an adaptive strategy, named ASPSO. The main contributions are summarized as follows. We introduce a chaotic map to tune the inertia weight and keep the balance between exploration behavior and exploitation in the search process. Elite and dimensional learning strategies are designed to replace the personal and global learning strategies, which enhances the diversity of the population and effectively avoids premature convergence. An adaptive position update strategy is used to improve the position quality of the next generation and further balance exploration and exploitation in the search process. Finally, a competitive substitution mechanism is presented to improve the accuracy of ASPSO's solutions.
This paper is structured as follows. Section 2 reviews the basic PSO. Section 3 illustrates the detailed process of the proposed ASPSO algorithm. Section 4 presents results and discussions about the proposed approach with other algorithms. In Section 5, we apply ASPSO to the engineering problem of the melt spinning process. Finally, a short conclusion is given in Section 6.
Particle swarm optimization (PSO)
PSO is a swarm intelligence optimization algorithm inspired by bird flocking and fish schooling [3]. In PSO, each particle represents a candidate solution with velocity and position vectors. When searching in a $D$-dimensional space, particle $i$ is represented by the position $X_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]$ with a velocity $V_i = [v_{i1}, v_{i2}, \ldots, v_{iD}]$. The velocity and the position are updated by the following formulas [18]:

$$v_{id} \leftarrow w\,v_{id} + c_1 r_1 \left(p_{id} - x_{id}\right) + c_2 r_2 \left(g_d - x_{id}\right)$$

$$x_{id} \leftarrow x_{id} + v_{id}$$

where $w$ is the inertia weight, $c_1$ and $c_2$ are the acceleration coefficients, $r_1, r_2 \in [0, 1]$ are uniform random numbers, $p_{id}$ is the $d$-th component of the personal best position of particle $i$, and $g_d$ is the $d$-th component of the global best position.
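The standard velocity and position updates can be sketched as a minimal global-best PSO implementation. The sphere test function and all parameter values below are illustrative defaults, not settings taken from this paper.

```python
import numpy as np

def pso(fitness, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lb=-10.0, ub=10.0, seed=0):
    """Basic global-best PSO following the standard update formulas."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))              # velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([fitness(p) for p in x])   # personal best fitness
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        # position update, clamped to the search bounds
        x = np.clip(x + v, lb, ub)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Minimize the sphere function sum(p**2); optimum is 0 at the origin.
best_x, best_f = pso(lambda p: np.sum(p**2))
```

On such a simple unimodal landscape the basic update converges reliably; the premature-convergence issues discussed above arise on multimodal functions.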
The proposed ASPSO algorithm
This section illustrates the proposed ASPSO algorithm in detail, as shown in Fig. 1. The chaotic inertia weight is introduced in Section 3.1. Elite and dimensional learning strategies are described in Section 3.2. The adaptive position update strategy and the competitive substitution mechanism are presented in Section 3.3 and Section 3.4, respectively.
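The exact chaotic map used by ASPSO is not given in this snippet; as a hedged illustration, the logistic map is a common choice for chaotic inertia-weight tuning. The map parameter (4.0), the initial value, and the weight range below are assumptions, not the paper's settings.

```python
def chaotic_inertia_weights(iters, z0=0.7, w_min=0.4, w_max=0.9, mu=4.0):
    """Generate a chaotic inertia-weight sequence from the logistic map
    z_{k+1} = mu * z_k * (1 - z_k), rescaled into [w_min, w_max].
    For mu = 4 and z0 in (0, 1), the iterates stay in [0, 1] and behave
    chaotically, avoiding the monotone decay of a linear schedule."""
    z, weights = z0, []
    for _ in range(iters):
        z = mu * z * (1 - z)
        weights.append(w_min + (w_max - w_min) * z)
    return weights
```

The irregular weight sequence perturbs the balance between exploration and exploitation from iteration to iteration, which is the motivation usually cited for chaotic tuning.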
Benchmark functions and comparison algorithm
The performance of the proposed ASPSO is tested on the CEC2017 benchmark functions [37]. Of the thirty functions, F2 is excluded from this experimentation because it shows unstable behavior, especially in high dimensions. The benchmark functions are divided into four categories: unimodal functions (F1–F3), simple multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30), as stated in Table 1.
To validate the performance of ASPSO, we chose eight
Application in the optimization of the melt spinning process
In this part, we apply the proposed ASPSO algorithm to optimize the melt spinning process, a typical complex real-world optimization problem. Melt spinning is one of the traditional methods of producing polymer fibers. Its principle is to feed polymer raw materials into a screw extruder, convey them through a heating zone by a rotating screw, and then send the melt to a metering pump after extrusion. Many varieties of synthetic fibers, such as polyester, cotton, and polypropylene, are all
Conclusions and future work
In this research, we integrated four strategies into PSO and obtained the ASPSO algorithm. In ASPSO, to better balance exploration behavior and exploitation nature, a chaotic map and an adaptive position updating strategy are proposed. Meanwhile, elite and dimensional learning strategies are devised to effectively enhance the diversity of the population and avoid premature convergence. Finally, a competitive substitution mechanism is presented to improve the accuracy of ASPSO for complex
CRediT authorship contribution statement
Rui Wang: Conceptualization, Methodology, Software, Data curation, Writing – original draft, Writing – review & editing. Kuangrong Hao: Funding acquisition, Supervision, Writing – review & editing. Lei Chen: Data curation, Investigation. Tong Wang: Data curation, Investigation. Chunli Jiang: Formal analysis.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This work was supported in part by the National Key Research and Development Plan from Ministry of Science and Technology (2016YFB0302701), the Fundamental Research Funds for the Central Universities (2232021A-10), National Natural Science Foundation of China (no. 61903078), Natural Science Foundation of Shanghai (19ZR1402300, 20ZR1400400), and Fundamental Research Funds for the Central Universities and Graduate Student Innovation Fund of Donghua University (CUSF-DH-D-2021050). In addition, we
References (50)
- A hybrid particle swarm optimization algorithm using adaptive learning strategy, Inf. Sci. (2018)
- Differential mutation and novel social learning particle swarm optimization algorithm, Inf. Sci. (2019)
- Grey wolf optimizer, Adv. Eng. Softw. (2014)
- Density-based particle swarm optimization algorithm for data clustering, Expert Syst. Appl. (2018)
- A constrained multi-swarm particle swarm optimization without velocity for constrained optimization problems, Expert Syst. Appl. (2020)
- A hierarchical simple particle swarm optimization with mean dimensional information, Appl. Soft Comput. (2019)
- Novel chaotic grouping particle swarm optimization with a dynamic regrouping strategy for solving numerical optimization tasks, Knowl.-Based Syst. (2020)
- Global genetic learning particle swarm optimization with diversity enhancement by ring topology, Swarm Evol. Comput. (2019)
- Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems, Inf. Sci. (2016)
- Chaotic dynamic weight particle swarm optimization for numerical function optimization, Knowl.-Based Syst. (2018)
- Self regulating particle swarm optimization algorithm, Inf. Sci.
- Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight, Appl. Soft Comput.
- A close neighbor mobility method using particle swarm optimizer for solving multimodal optimization problems, Inf. Sci.
- A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization, Inf. Sci.
- Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation, Swarm Evol. Comput.
- Particle swarm optimization based on dimensional learning strategy, Swarm Evol. Comput.
- Heterogeneous comprehensive learning and dynamic multi-swarm particle swarm optimizer with two mutation operators, Inf. Sci.
- Multipopulation cooperative particle swarm optimization with a mixed mutation strategy, Inf. Sci.
- Self-adaptive mutation differential evolution algorithm based on particle swarm optimization, Appl. Soft Comput.
- A hybrid particle swarm optimizer with sine cosine acceleration coefficients, Inf. Sci.
- An improved hybrid ant particle optimization (IHAPO) algorithm for reducing travel time in VANETs, Appl. Soft Comput.
- Chaotic bat algorithm, J. Comput. Sci.
- Hybrid particle swarm optimization with spiral-shaped mechanism for feature selection, Expert Syst. Appl.
- The whale optimization algorithm, Adv. Eng. Softw.
- A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput.