Abstract
In the simultaneous optimization of multiple objectives, balancing convergence promotion and diversity preservation during the evolutionary process is a key and challenging problem. In this research, a hyperplane-assisted multi-objective particle swarm optimization with a twofold proportional assignment strategy (tpahaMOPSO) is proposed to improve the optimization performance of MOPSO. First, the external archive is maintained by combining hyperplane-based convergence evaluation with shift-based density estimation to retain high-quality candidate solutions. Second, a twofold proportional assignment scheme is designed to search the regions surrounding the more promising candidate solutions, emphasizing convergence and diversity, respectively. Third, the domination relationship and the convergence difference are combined to select a more reasonable personal historical best and reduce the risk of particle aggregation. Finally, the proposed tpahaMOPSO was compared with ten representative and advanced multi-objective optimization algorithms on 22 widely used test functions with different characteristics. The simulation results show that tpahaMOPSO achieved the best result on 11 benchmark functions for both the IGD and HV criteria. The Friedman test was also applied for ranking analysis, where the proposed algorithm likewise obtained excellent statistical results. These experimental studies verify the promising performance and strong competitiveness of the proposed tpahaMOPSO.
1 Introduction
Problems that require the optimization of multiple objectives (MOPs) commonly arise in scientific research and engineering [1,2,3], where several conflicting objectives must be handled simultaneously. In general, owing to this special feature of MOPs, improving the performance of one objective may degrade the performance of one or more of the remaining objectives [4]. Moreover, it is generally impossible to find a single solution at which every contradictory objective reaches its optimum simultaneously [5]; the result is therefore a set of trade-off solutions, in contrast to the single global optimum of a single-objective optimization problem (SOP). This set, which achieves the trade-off between the disparate objectives, is termed the Pareto optimal solution set (PS); the corresponding set of objective vectors of the PS is called the Pareto front (PF) [6].
MOPs are difficult to solve with conventional optimization methods such as mathematical programming. Evolutionary algorithms (EAs), a classic heuristic technique developed over several decades, have been demonstrated to be a productive framework for solving MOPs: because they evolve a population, they can search multiple feasible solutions in a single run. The research field of multi-objective EAs (MOEAs) has therefore attracted extensive attention from the academic community, and many well-known MOEAs have emerged, for instance, the non-dominated sorting genetic algorithm II (NSGAII) [7], the strength Pareto EA 2 (SPEA2) [8], the indicator-based EA (IBEA) [9], the MOEA based on decomposition (MOEA/D) [10], and others [11, 12]. Stimulated by the widespread use of MOEAs, numerous swarm-intelligence-based algorithms, including the multi-objective immune algorithm (MOIA) [13], multi-objective differential evolution [14], multi-objective ant colony optimization (MACO) [15], and multi-objective particle swarm optimization (MOPSO) [16], have been put forward one after another. In addition, iterative optimization algorithms based on other perspectives [17,18,19] have also emerged. Among these swarm intelligence algorithms, MOPSO has become one of the mainstream optimization approaches.
Since particle swarm optimization (PSO) was first put forward by Kennedy and Eberhart in 1995 [20], its easily understood principle, simple operation, and speedy convergence in solving SOPs have made it natural for researchers to apply this optimization mechanism to MOPs and expect promising performance. The MOPSO based on an adaptive grid put forward by Coello et al. [16] is one of the most typical schemes. The algorithm collects the Pareto optimal solutions obtained during the evolutionary process in an archive, which is maintained by an adaptive grid operation when the external archive is full. This study laid the basic framework for abundant subsequent improvement works. Nevertheless, two key issues must be tackled when applying this framework to design a qualified algorithm. The first is the maintenance of the external elite archive that stores non-dominated solutions: an appropriate archiving strategy should be designed to retain solutions of superior performance. The second is how to select the two best leaders, the global best leader (gbest) and the personal best leader (pbest), for the particles. Unlike PSO in SOPs, which can simply locate the optimal solution in the search space, MOPSO requires a specific selection strategy to select leaders from the existing solutions in the archive to guide the evolution of the population. In the design of MOPSOs, the consistent measure of performance is that the resulting solutions converge to the true PF while being uniformly distributed, that is, the convergence and diversity of EAs [21]. To improve the quality of the final PS, scholars have worked out various excellent improvement schemes from different perspectives.
Singling out an appropriate gbest from the archive is a challenge, as it is directly related to the exploration of previously unvisited areas and the exploitation of promising areas. If solutions located in uncrowded areas are selected as gbest, the swarm is more likely to explore those uncrowded areas; conversely, when convergence is emphasized in selecting the gbest, the swarm will exploit the surrounding region more. Many different studies have addressed this challenge. In the study [16], roulette selection assigns probabilities to all candidates in the external archive so that candidates located in less crowded areas are randomly assigned to each particle, and the work [22], based on a decomposition strategy, randomly selects a gbest from the elite archive for sub-problems with different weights. Figueiredo et al. [23] combine extreme-value solutions and tournament selection to promote coverage of the PF. AMOPSO/ESE [24] uses different measures on the archive to select one convergence leader and several diversity leaders, choosing suitable leaders for different states of the swarm, which increases the rationality of convergence and diversity management. Two different distances are used to evaluate leaders with different characteristics to meet the evolutionary demands of the population [25]. In the work [26], the gbest is chosen by two methods in succession: solutions are first compared based on dominance differences, and density estimation is then used when the first method fails to identify the optimal solution. In addition, ranking is an intuitive method: in the study [27], a global margin ranking scheme is proposed to evaluate the candidates in the archive, while tssAMOPSO [28] takes into account the outcomes of two rankings, average ranking (AR) and global detriment (GD).
It is intuitive to utilize the ranking information of candidate solutions to choose those with better convergence as leaders. Clearly, the effect of the gbest selection strategy on the performance of MOPSOs has received a great deal of attention.
The replacement design for the personal best is comparatively simple and straightforward. In many existing MOPSOs [29, 30], the pbest selection strategy considers only the Pareto domination relationship: when a particle reaches a new location in the search space, if this newly found location dominates its historically optimal location, the new position becomes the historical best position of that particle; conversely, the historically optimal location remains unchanged if it dominates the new location. If the two are mutually non-dominated, the new pbest is picked at random between the historical best location and the newly discovered location. In recent years, researchers have paid attention to the rationality of the pbest selection strategy. A method of constructing a dominance difference matrix to compare the new positions with the current pbest is proposed in the work [26]: the particle corresponding to the column of the dominance difference matrix with the largest absolute column-sum norm is taken as pbest, a design that enhances the discriminability between candidates. Li et al. [31] designed a new pbest selection strategy that takes into account the angle and distance information between the gbest, the particle, and the pbest in different iteration stages, so that the selection of pbest has guiding significance. ecemAMOPSO [32] constitutes a memory interval by storing the positions of particles over recent iterations, from which the best solution is selected as pbest, increasing the exploration capacity of the population. These results illustrate that attention to the replacement mechanism of pbest can promote the exploration capacity of MOPSOs and improve evolutionary performance.
As the population evolves iteratively, more and more non-dominated solutions are found. However, the external archive, which acts as a repository for elite particles, has a predefined, fixed maximum capacity. It is therefore necessary to design an archive maintenance strategy to decide whether newly discovered non-dominated solutions are retained or deleted. Since gbest is commonly selected from the archive, the quality of the candidates in the repository directly affects the performance of the algorithm, and the solutions in the archive after the last iteration form the final output. For MOPSOs, converging too fast may cause premature convergence, leading to a local optimum; algorithms suffering from this dilemma end up with extremely poor diversity. Conversely, an algorithm that converges too slowly will not obtain a well-distributed approximate PF. To obtain a high-quality archive, various MOPSOs have been studied that try to further control the influence of convergence and diversity factors. In the work [26], the external archive is updated by moving the best half of the population, in decreasing order of quality, into the archive; the dominance difference matrix of the external archive is then reconstructed and the worst half of the solutions are removed, improving the search efficiency of the particles. Yang et al. [33] designed a retention mechanism based on a max–min vector angle to trim the overflowing archive: the approach finds the two most similar search directions and maximizes the difference between candidate solutions in the archive to promote archive diversity. In MOPSO/vPF [31], reference vectors are applied to prune the overfull archive and ensure maximal distance between candidate solutions; the feature of this algorithm is that a virtual PF is built from the archive, so the reference vectors can be dynamically updated based on the virtual PF.
To guide the evolutionary process, Figueiredo et al. [23] and Wu et al. [24] use reference points: after the solutions in the archive are used to generate hyperplanes, the density of the solutions is evaluated, thus preserving more diverse solutions. Other references [34, 35] propose further improvements. The experimental results show that a proper archive management strategy gives an algorithm better optimization performance and diversity.
The algorithms mentioned above have shown excellent performance in solving MOPs; however, each of these methods is more or less lacking in comprehensiveness. From the previous discussion, the difficulties of MOPSO can be summed up in two aspects. First, overemphasis on diversity in the archive maintenance strategy may lead to insufficient convergence accuracy of the final approximate PF; conversely, overemphasis on convergence may lead to narrow coverage of the true PF, so controlling the balance between convergence and diversity remains an open challenge. Second, the relative learning intensity of the global best and the personal best must be defined, since the next direction of movement of a particle is influenced by both. In most algorithms, the personal best can only be chosen between the latest position and the historical optimal location, which limits the learning range of particles to some extent and results in situations of invalid guidance. This article proposes a hyperplane-assisted multi-objective particle swarm optimization with a twofold proportional assignment strategy, named tpahaMOPSO, designed to solve or mitigate the above challenges and improve the overall performance of MOPSO. Three main strategies are presented in the proposed tpahaMOPSO. More specifically, the primary contributions of this research are outlined below:
-
A technique is developed to maintain the archive. A hyperplane-based convergence evaluation and a shift-based diversity evaluation are designed, and the results of the two are combined so that the more promising solutions are retained. In addition, time-varying weights are incorporated into the technique so that convergence is emphasized in the early iterations and, once the convergence of the algorithm is stable, diversity is emphasized in the later iterations.
-
A twofold proportional assignment scheme is put forward to update the global leader of all particles. The basic idea of this scheme is to allocate different numbers of particles to each gbest on the basis of the superiority of the non-dominated solutions in the archive, so that more promising areas are exploited. This scheme elevates the optimization ability of tpahaMOPSO and, to a large extent, balances convergence promotion and diversity preservation.
-
A comprehensive pbest selection strategy is presented, which can slightly slow down the rate of particle aggregation. In the selection based on the Pareto domination relation, the convergence difference between the particle's historical best position and its most recent position is considered. In addition, when a particle's pbest is its latest position, a new solution is reassigned to the particle as pbest with a specific probability.
The remainder of this article is organized as follows. Section 2 briefly introduces MOPs and PSO as preliminaries. The detailed procedures of the proposed improved MOPSO (tpahaMOPSO) are described in Sect. 3. In Sect. 4, comparison experiments with existing MOPSOs and MOEAs are conducted, and the simulation results are discussed and analyzed statistically to demonstrate the effective and excellent performance of tpahaMOPSO. Finally, the article is summarized.
2 Related Work
2.1 Multi-objective Optimization Problems
Multi-objective optimization problems usually require concurrently minimizing or maximizing multiple objectives with inherent conflict [36]. In general, a minimization MOP can be written mathematically as follows:

\[ \begin{aligned} \min \;\; & F({\text{x}}) = [f_{1} ({\text{x}}),f_{2} ({\text{x}}), \ldots ,f_{M} ({\text{x}})]^{T} \\ {\text{s}}{\text{.t}}{.}\;\; & g_{i} ({\text{x}}) \le 0,\quad i = 1,2, \ldots ,p \\ & h_{j} ({\text{x}}) = 0,\quad j = 1,2, \ldots ,q \end{aligned} \]

where \({\text{x}} = [x_{1} ,x_{2} , \ldots ,x_{n} ]\) denotes a bounded decision vector with n elements in the decision space \(\Omega \in R^{n}\), and \(F({\text{x}}) \in R^{M}\) represents the objective vector comprising M conflicting objectives, with \(f_{{\text{m}}} ({\text{x}})\) the m-th objective function. R is the set of real numbers. \(g_{i} ({\text{x}})\) and \(h_{j} ({\text{x}})\) refer to the i-th inequality constraint and the j-th equality constraint, respectively; p is the number of inequality constraints and q is the total number of equality constraints.
Assume that two different decision vectors \({\text{x}}^{1} = [x_{1}^{1} ,x_{2}^{1} , \ldots ,x_{n}^{1} ]\) and \({\text{x}}^{2} = [x_{1}^{2} ,x_{2}^{2} , \ldots ,x_{n}^{2} ]\), \({\text{x}}^{1} ,{\text{x}}^{2} \in \Omega\), are both feasible solutions of the above MOP. \({\text{x}}^{1}\) is said to Pareto dominate \({\text{x}}^{2}\), denoted \({\text{x}}^{1} \prec {\text{x}}^{2}\), when their corresponding objectives satisfy \(\forall a \in \left\{ {1,2, \ldots ,M} \right\},f_{a} ({\text{x}}^{1} ) \le f_{a} ({\text{x}}^{2} )\) and \(\exists a \in \left\{ {1,2, \ldots ,M} \right\},f_{a} ({\text{x}}^{1} ) < f_{a} ({\text{x}}^{2} )\). A solution \({\text{x}}^{1}\) is regarded as a non-dominated solution, also known as a Pareto optimal solution, if and only if there exists no other solution \({\text{x}}^{2}\) such that \({\text{x}}^{2} \prec {\text{x}}^{1}\); all such solutions constitute the Pareto optimal solution set, abbreviated as PS. The surface formed by the PS in objective space is termed the Pareto front (PF), defined as \(PF = \left\{ {F({\text{x}})|{\text{x}} \in PS} \right\}\).
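As a concrete illustration, the dominance relation for minimization can be sketched as a small predicate operating directly on objective vectors:

```python
def dominates(f1, f2):
    """Return True if objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better
```

Note that two vectors can be mutually non-dominated, which is exactly why MOPs yield a set of trade-off solutions rather than a single optimum.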
2.2 Particle Swarm Optimization
PSO is an evolutionary computation technique based on stochastic population intelligence optimization [37], where each particle represents a potential solution and learns from the global best experience identified by the entire swarm and its personal best experience to explore the solution space via collaborative flying. The learning model of PSO simulates the movements of a swarm of birds or a school of fish searching for food. \({\text{x}}^{i} (t) = [x_{1}^{i} (t),x_{2}^{i} (t), \ldots ,x_{D}^{i} (t)]\) represents the position vector of the \(i\)-th particle at the t-th iteration, where \(i = 1,2 \ldots ,N\) is the index of each particle, \(N\) is the size of the swarm, D is the dimension of the search space, \(t = 1,2 \ldots ,T\), and T is the number of the final generation. Similarly, the \(i\)-th particle's velocity vector is denoted \({\text{v}}^{i} (t) = [v_{1}^{i} (t),v_{2}^{i} (t), \ldots ,v_{D}^{i} (t)]\). The personal historical best experience of the \(i\)-th particle is recorded as \(pbest^{i} (t) = [p_{1}^{i} (t),p_{2}^{i} (t), \ldots ,p_{D}^{i} (t)]\), and the global best experience of the \(i\)-th particle is denoted \(gbest^{i} (t) = [g_{1}^{i} (t),g_{2}^{i} (t), \ldots ,g_{D}^{i} (t)]\). In each iteration, the velocity and position of each dimension d are updated by the following two equations:

\[ v_{d}^{i} (t + 1) = \omega v_{d}^{i} (t) + c_{1} r_{1} \left( {p_{d}^{i} (t) - x_{d}^{i} (t)} \right) + c_{2} r_{2} \left( {g_{d}^{i} (t) - x_{d}^{i} (t)} \right) \]

\[ x_{d}^{i} (t + 1) = x_{d}^{i} (t) + v_{d}^{i} (t + 1) \]
where \(\omega\) is the inertia coefficient controlling the impact of the previous velocity on the current velocity, the constants \(c_{1}\) and \(c_{2}\) are two impact factors reflecting the weighting of self-cognitive and social-cognitive learning, and \(r_{1}\) and \(r_{2}\) are two independent random numbers uniformly distributed in the interval \([0,1]\). When PSO is applied to solve MOPs, some of its components must be modified to make it suitable for MOPs.
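The update rule for a single particle can be sketched as follows; the concrete values of \(\omega\), \(c_{1}\), and \(c_{2}\) here are illustrative defaults, not the settings used by tpahaMOPSO:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle. x, v, pbest, and
    gbest are lists of equal length D; w, c1, c2 play the roles described in
    the text. Fresh r1, r2 are drawn per dimension."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

In practice, implementations also clamp the velocity and position to the bounds of the search space, which is omitted here for brevity.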
3 The Proposed tpahaMOPSO
The main goal of tpahaMOPSO is to balance the convergence and diversity of populations during evolution, and it consists of three components: management of the external archive, global leader selection, and personal leader selection. These three components will be discussed in detail in the following description. In addition, for intuitive understanding, the relationship between these three components is shown in Fig. 1.
3.1 Maintenance Strategy of External Archive
The external archive exports the final solution set obtained by the algorithm, and its proper management is crucial to the performance of the algorithm. Most archive maintenance policies in MOPSOs consider only the diversity of solutions, which may result in the removal of solutions with excellent convergence; therefore, when the archive is maintained because the number of obtained non-dominated solutions exceeds a predefined maximum threshold, the convergence of the solutions should be considered as well as their diversity. The archive maintenance strategy in this paper thus evaluates both the convergence and the uniformity of solutions to determine which are retained or deleted. The greater the diversity of candidate solutions in the external archive, the lower the similarity between them and the more diverse the search directions available to the particles. According to Ref. [33], the smaller the vector angle between two solutions, the more similar their search directions. If a large number of solutions with similar search directions are kept in the archive, the evolutionary efficiency of the population is reduced and computational resources are wasted. Therefore, in tpahaMOPSO, when the archive needs to be trimmed, the vector angle between every pair of solutions is first calculated to find the two solutions with the smallest vector angle, that is, the solutions with the most similar search directions. In the objective space, the angle between the objective vectors of any two solutions can be calculated using the following formula:

\[ angle({\text{x}}^{1} ,{\text{x}}^{2} ) = \arccos \frac{{F({\text{x}}^{1} )^{T} F({\text{x}}^{2} )}}{{\left\| {F({\text{x}}^{1} )} \right\|\left\| {F({\text{x}}^{2} )} \right\|}} \]
where \(\left\| {F(x)} \right\|\) represents the L2 norm of the target vector for solution \(x\).
When the two solutions with the most similar search directions are found, their performance is evaluated and the worse-performing solution is deleted. In some literature [24, 38], the distance from the solutions to the ideal point is used to measure the convergence of the solutions. In Ref. [39], the authors use the solutions in the archive to construct a hyperplane and calculate the distance from the solutions to the plane in order to preserve the knee points with better convergence. Inspired by these works, in this work the hyperplane equation is established using the projection points of the ideal point on the objective axes, and the convergence of the solutions is then measured by calculating their distance to the plane. The steps to construct the hyperplane are as follows. First, the ideal point \(Z^{*} = (z_{1} ,z_{2} , \ldots ,z_{{\text{M}}} )\) is approximated from the solutions in the external archive A. The ideal point of a MOP is determined by the minimum value of each objective function; since the true PF of a MOP is generally unknown, the ideal point is also unknown and can only be approximated from the non-dominated solutions in the external archive, where \(z_{{\text{m}}} = \min f_{{\text{m}}} (x),x \in A,m = 1,2, \ldots ,M\). Then, the ideal point \(Z^{ * }\) is projected onto the M objective axes in turn to obtain M projection points \(P_{1} = (p_{1} ,0, \ldots ,0)\), \(P_{2} = (0,p_{2} , \ldots ,0)\), …, \(P_{{\text{M}}} = (0,0, \ldots ,p_{{\text{M}}} )\). The hyperplane equation \(\sigma^{T} x + b = 0\) is obtained by substituting the M points \(P_{1} ,P_{2} , \ldots P_{M}\) (when \(M = 2\) the equation describes a line, when \(M = 3\) a plane, and when \(M \ge 4\) a hyperplane).
The process of building a hyperplane by projecting an ideal point \(Z^{ * }\) onto the target axes in two-dimensional space can be visually seen in Fig. 2.
Next, the distance from each solution to the hyperplane can be calculated to measure the proximity of the solutions; the closer a solution is to the plane, the better its convergence. The distance is expressed as

\[ dist({\text{x}}) = \frac{{\left| {\sigma^{T} F({\text{x}}) + b} \right|}}{\left\| \sigma \right\|} \]

where \(\sigma\) is the normal vector of the plane, \(b\) is the intercept from the plane to the origin, and \(\sigma\) and \(x\) are column vectors.
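The construction above can be sketched as follows. Passing the hyperplane through the axis projections of the ideal point \(z\) gives \(\sum_{m} x_{m} / z_{m} = 1\), i.e. \(\sigma_{m} = 1/z_{m}\) and \(b = -1\); this sketch assumes every \(z_{m}\) is nonzero (a small epsilon shift would be needed otherwise), and the function names are illustrative:

```python
import math

def hyperplane_from_ideal(z):
    """Hyperplane through the projections of the ideal point z onto the
    objective axes: sum_m x_m / z_m = 1, i.e. sigma_m = 1/z_m, b = -1.
    Assumes all z_m are nonzero."""
    sigma = [1.0 / zm for zm in z]
    b = -1.0
    return sigma, b

def distance_to_hyperplane(f, sigma, b):
    """Perpendicular distance from objective vector f to the hyperplane;
    a smaller distance indicates better convergence under this measure."""
    num = abs(sum(s * fm for s, fm in zip(sigma, f)) + b)
    den = math.sqrt(sum(s * s for s in sigma))
    return num / den
```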
For the performance measurement of solutions, the shift-based density estimation (SDE) [40] method has been applied in many works [41, 42]; SDE shifts the positions of the other solutions according to the objective values of the solution under evaluation before calculating its density. In this paper, the SDE measurement is used to evaluate the diversity of solutions. Before calculating the density of a solution, it is necessary not only to shift the positions of the other solutions according to its objective values but also to normalize the objective values of the solutions to eliminate the influence of possibly different scales. The diversity of solutions is measured as follows:

\[ SDE({\text{x}}) = \mathop {\min }\limits_{{{\text{x}}^{2} \in A\backslash \{ {\text{x}}\} }} \sqrt {\sum\limits_{m = 1}^{M} {\left( {sf_{{\text{m}}}^{\prime} ({\text{x}}^{2} ) - f_{{\text{m}}}^{\prime} ({\text{x}})} \right)^{2} } } ,\quad sf_{{\text{m}}}^{\prime} ({\text{x}}^{2} ) = \max \left\{ {f_{{\text{m}}}^{\prime} ({\text{x}}^{2} ),f_{{\text{m}}}^{\prime} ({\text{x}})} \right\} \]
where \(f_{{\text{m}}}^{\prime} (x)\) represents the m-th objective value of solution \(x\) after normalization, and \(sf_{{\text{m}}}^{\prime} (x^{2} )\) is the m-th coordinate of solution \(x^{2}\) after it has been shifted. The larger the SDE value, the smaller the density around the solution and the better its performance.
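A minimal sketch of this shift-based density estimation on already-normalized objective vectors, following the shifting rule described above:

```python
import math

def sde(f, others):
    """Shift-based density estimation for a solution with normalized
    objectives f against the other archive members' objectives. Each other
    solution is shifted so that any objective better than f's is moved up
    to f's value before the Euclidean distance is measured."""
    best = float("inf")
    for g in others:
        shifted = [max(gm, fm) for gm, fm in zip(g, f)]
        d = math.sqrt(sum((sm - fm) ** 2 for sm, fm in zip(shifted, f)))
        best = min(best, d)
    return best  # larger SDE -> lower density around f
```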
After the convergence and diversity of the solutions are measured according to the aforementioned methods, the two results need to be integrated. Generally speaking, different emphasis should be placed on the performance of the solutions at different stages: the convergence of the solutions should be emphasized in the early stage of the evolutionary process, and their diversity should be emphasized once convergence has stabilized. This ensures that the solutions approach the true PF quickly in the early stages and that a uniformly distributed solution set is obtained in the later stages. Such a process is defined in Eq. (9):
Equation (9) provides a comprehensive way to measure the convergence and diversity of solutions. The first half of the equation takes the reciprocal of SDE, adding 1 to the denominator both to avoid a zero denominator and to keep the ranges of the two terms comparable. \(FE\) is the current number of fitness evaluations, and \(\max FE\) is the maximum number of fitness evaluations. For a minimization MOP, the smaller the value of \(CD\), the better the overall performance of the solution, and such solutions are preferentially retained. According to the above discussion, the archive maintenance strategy in this paper balances the convergence and diversity of the algorithm; its specific process is shown in Algorithm 1.

Archive management
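The trimming loop of Algorithm 1 can be sketched as follows; the `cd` argument is a caller-supplied stand-in for the combined measure of Eq. (9), whose exact form is not reproduced here, and the smaller-angle pair search follows the description above:

```python
import math

def trim_archive(archive_f, cd, max_size):
    """Sketch of the archive trimming loop: while the archive overflows,
    find the pair of solutions with the smallest vector angle (most similar
    search direction) and delete the one with the worse (larger) CD score."""
    def angle(f1, f2):
        dot = sum(a * b for a, b in zip(f1, f2))
        n1 = math.sqrt(sum(a * a for a in f1)) or 1e-12
        n2 = math.sqrt(sum(b * b for b in f2)) or 1e-12
        return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

    archive_f = list(archive_f)
    while len(archive_f) > max_size:
        pairs = [(angle(archive_f[i], archive_f[j]), i, j)
                 for i in range(len(archive_f))
                 for j in range(i + 1, len(archive_f))]
        _, i, j = min(pairs)
        worse = i if cd(archive_f[i]) > cd(archive_f[j]) else j
        archive_f.pop(worse)
    return archive_f
```

Recomputing all pairwise angles each round is quadratic per deletion; an implementation would typically cache the angle matrix and update only the affected rows.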
3.2 Selection Strategy for Gbest
The locations of the non-dominated solutions in the archive are the most promising regions, and the performance of different candidate solutions differs. The better the performance of a candidate solution, the more promising the search in its region; the worse the performance, the less promising it is. In tpahaMOPSO, a twofold proportional assignment strategy is proposed, which assigns a corresponding number of particles to each non-dominated solution according to the degree of excellence of the different solutions in the archive. The SDE measure above is used to quantify the distribution of the solutions, and the number of particles assigned to each solution in the archive based on its SDE value is calculated as follows:
The convergence of the solutions is measured by combining the average ranking (AR) and the global detriment (GD) from the study [32]. The AR and GD of each Pareto optimal solution are calculated as
The former tends to cluster individuals in areas close to the objective, and the latter may ignore individuals at the edge of the PF, so combining the advantages of both comprehensively evaluates the convergence of each solution in the archive. The combination mode is given in Eq. (13), and the number of particles assigned to each non-dominated solution based on its AG value is presented in Eq. (14).
In the above equations, \(N\) represents the size of the population, and \(n\) is the number of particles assigned to the j-th solution \(a_{j}\) in the archive. The switch between the two measures is not complicated; it simply depends on whether the newly generated non-dominated solutions dominate the solutions in the archive: if so, the particles are assigned on the basis of the AG values of the non-dominated solutions, and otherwise according to the SDE values. The SDE and AG values provide information about the distribution and convergence of a solution, and the larger a solution's SDE or AG value, the more particles follow it to learn, which maintains the convergence and diversity of the newly discovered non-dominated solutions. The adaptive switching between SDE and AG can better emphasize the diversity and convergence of the solutions.
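The proportional allocation can be sketched as follows; the exact rounding used in Eqs. (10)–(14) is not reproduced here, so this sketch uses a largest-remainder scheme to guarantee the counts sum exactly to \(N\):

```python
def allocate_particles(scores, N):
    """Allocate N particles among archive members proportionally to their
    scores (SDE or AG values, depending on which measure is active).
    Leftover particles from integer truncation go to the members with the
    largest fractional shares."""
    total = sum(scores)
    raw = [N * s / total for s in scores]
    counts = [int(r) for r in raw]
    remainder = N - sum(counts)
    order = sorted(range(len(scores)),
                   key=lambda j: raw[j] - counts[j], reverse=True)
    for j in order[:remainder]:
        counts[j] += 1
    return counts
```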
3.3 Selection Strategy for Pbest
The selection of the pbest is a fundamental step of MOPSO. Most MOPSOs determine the pbest of each particle based on the Pareto domination relationship between its current position and its historical pbest; when the two are mutually non-dominated, the pbest is updated by random selection. In tpahaMOPSO, to make the selection of pbest more reasonable and reliable, when the current position of the particle and the historical pbest are mutually non-dominated, the convergence difference between them is calculated as follows:
where \(p^{i} (t - 1)\) is the historical pbest of the \(i\)-th particle and \(x^{i} (t)\) is the current position of the \(i\)-th particle. The smaller the convergence difference, the shorter the distance by which each dominates the other. The location with the smaller convergence difference is then selected as the new pbest, which can effectively promote the evolution of the particles.
Furthermore, the evolutionary process must be balanced: fast convergence leads to the loss of diversity, while an overemphasis on diversity yields a poor approximation of the final solutions to the true front. This paper extends the range of historical best models that particles can learn from to the candidate solutions in the external archive, improving the utilization of existing favorable resources. When the personal best of most particles in the population is their current location, these particles have only one learning object, the global best, which increases the risk of particles aggregating together. To avoid this risk, in tpahaMOPSO, when a particle's pbest is its current position, the archive solution closest to the particle is selected as its new pbest with probability 0.5, which slightly slows the loss of diversity caused by too-fast convergence of the algorithm. The detailed steps for updating pbest are shown in Algorithm 2.

Comprehensive selection of pbest
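The control flow of this pbest update can be sketched as follows. The `conv` argument is a per-solution convergence score standing in for the pairwise convergence-difference measure of the text (whose exact formula is not reproduced here), and all vectors are objective vectors:

```python
import math
import random

def update_pbest(curr_f, pbest_f, archive_f, conv, reassign_prob=0.5):
    """Control-flow sketch of the comprehensive pbest selection. curr_f and
    pbest_f are objective vectors, archive_f holds archive objective
    vectors, and conv scores convergence (smaller is better)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    if dominates(curr_f, pbest_f):
        chosen = curr_f
    elif dominates(pbest_f, curr_f):
        chosen = pbest_f
    else:
        # Mutually non-dominated: keep the better convergence score.
        chosen = curr_f if conv(curr_f) <= conv(pbest_f) else pbest_f
    if chosen is curr_f and archive_f and random.random() < reassign_prob:
        # With probability 0.5, learn from the closest archive member
        # instead, which curbs particle aggregation.
        chosen = min(archive_f, key=lambda a: math.dist(a, curr_f))
    return chosen
```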
3.4 The Framework of tpahaMOPSO
The main components of tpahaMOPSO, namely the archive update strategy assisted by vector angles, SDE, and the hyperplane, the twofold proportional allocation of gbest, and the comprehensive selection of pbest, have been described in the sections above. Together, these three parts increase the search performance of tpahaMOPSO. The pseudocode of tpahaMOPSO is given in Algorithm 3.

Framework of tpahaMOPSO
4 Experimental Studies and Analysis
In this section, in order to evaluate the proposed tpahaMOPSO comprehensively, three extensively used benchmark test suites, ZDT [43], UF [44], and DTLZ [45], are employed in the experimental comparisons: five ZDT benchmark functions (ZDT1–ZDT4, ZDT6) and seven UF benchmark functions (UF1–UF7) are bi-objective problems, while three UF benchmark functions (UF8–UF10) and seven DTLZ benchmark functions (DTLZ1–DTLZ7) are three-objective problems. It is worth mentioning that the discrete optimization problem ZDT5 has been omitted, and that DTLZ8 and DTLZ9 are constrained optimization problems and hence are not included. To further demonstrate the search performance of the proposed tpahaMOPSO, the corresponding contrast algorithms are selected as competitors, including five existing peer algorithms, CMOPSO [46], NMOPSO [47], MOPSO [16], SMOPSO [48], and MOPSOD [49], and five competitive MOEAs, namely MOEA/D [10], NSGAII [7], SPEAR [50], DGEA [51], and PREA [52]. Moreover, to ensure the validity and fairness of the experiments, all simulations are performed on MATLAB R2020b and run on an Intel (R) Core (TM) i7-8750H CPU at 2.20 GHz under the Windows 10 operating system. The corresponding experimental results of all comparison algorithms are obtained with PlatEMO [53].
4.1 Performance Metrics
In this study, two comprehensive metrics that are widely used in the field of multi-objective optimization, inverted generational distance (IGD) [54] and hypervolume (HV) [55], are utilized to measure the evolutionary performance of all test algorithms in comparison experiments.
The IGD measures the average distance from the points sampled on the true Pareto front to the approximate front obtained by an algorithm, thereby reflecting both the convergence and the diversity of the optimal solutions. The IGD can be calculated as

$$\mathrm{IGD}(\mathrm{A},\mathrm{P}^{*}) = \frac{\sum\nolimits_{x \in \mathrm{P}^{*}} \min_{dist} (x,\mathrm{A})}{|\mathrm{P}^{*}|}$$

where A is the final set of solutions derived by the algorithm, \({\text{P*}}\) is a set of evenly sampled points on the true Pareto front, and \({\text{|P*}}|\) is the number of sampling points. \(\min_{dist} (x,{\text{A}})\) denotes the minimum Euclidean distance between the point \(x\) and the solutions in A. The smaller the IGD value, the better the quality of the solutions in A.
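The IGD computation can be illustrated with a short stdlib-only sketch (the function name `igd` is our own; real studies such as this one use the PlatEMO implementation):

```python
import math

def igd(A, P_star):
    """IGD: mean over the true-front sample P* of the minimum Euclidean
    distance from each sampled point to the obtained set A
    (smaller is better)."""
    return sum(min(math.dist(x, a) for a in A) for x in P_star) / len(P_star)
```

For example, a solution set that exactly matches the sampled front gives an IGD of zero.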
To avoid the one-sidedness of evaluating with the IGD metric alone, the HV is also used to evaluate convergence and diversity; it measures the hypervolume of the objective-space region dominated by the final solution set and bounded by a reference point:

$$\mathrm{HV}(\mathrm{A}) = L\left( \bigcup\nolimits_{j = 1}^{|\mathrm{A}|} V_{j} \right)$$

where \(L( \cdot )\) represents the Lebesgue measure, \({\text{A}}\) is the final set of solutions derived by the algorithm, \({\text{|A|}}\) is the number of solutions in \({\text{A}}\), and \(V_{j}\) denotes the hypervolume bounded by the \(j\)-th solution in \({\text{A}}\) and the predefined reference point. In contrast to the IGD, the larger the HV value, the better the quality of the final set of solutions.
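For the bi-objective (minimization) case, the Lebesgue measure reduces to an area that can be computed by sweeping the points in sorted order. The sketch below is a simplified two-objective illustration of the HV idea, not the general algorithm used by PlatEMO; the function name `hv_2d` is hypothetical.

```python
def hv_2d(A, ref):
    """Hypervolume (area) dominated by a bi-objective minimization set A
    and bounded by the reference point ref (larger is better)."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in A if p[0] < ref[0] and p[1] < ref[1])
    vol, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip points dominated within A
            vol += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return vol
```

For instance, the set {(1, 2), (2, 1)} with reference point (3, 3) dominates two unit-width rectangles whose union has area 3.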
4.2 Parameter Settings
Ten contrast algorithms are compared with the proposed tpahaMOPSO experimentally. To make the comparisons meaningful and fair, all configuration parameters of the participant algorithms are kept consistent with their original references; the main parameter values of the different algorithms are shown in Table 1. The settings of the 22 benchmark problems are listed in Table 2, where N is the number of particles in the population, M is the number of objective functions, and D is the dimension of the decision variables. The termination condition is FEs, the maximum number of fitness evaluations. As shown in Table 2, ZDT1, ZDT2, and ZDT3 have 30-D decision variables; ZDT4 and ZDT6 have 10-D decision variables; the standard UF test functions have 30-D decision variables; and DTLZ1 has 7-D, DTLZ2–DTLZ6 have 12-D, and DTLZ7 has 22-D decision variables. Each algorithm was run independently 30 times on each test problem to obtain more comprehensive experimental results.
4.3 Comparative Experiments with MOPSOs
The average and standard deviation values of the IGD and HV on ZDT1–4, 6, UF1–10, and DTLZ1–7 over 30 independent runs obtained by tpahaMOPSO and the other five MOPSOs (CMOPSO, NMPSO, MOPSO, SMPSO, and MPSOD) are given in Tables 3 and 4, where the best result on each test problem is highlighted in bold. Table 3 shows that the proposed tpahaMOPSO achieves four best IGD results on the five ZDT functions, six best IGD results on the ten UF functions, and the minimum IGD value on the DTLZ6 function. Specifically, tpahaMOPSO achieves a smaller average IGD than the compared MOPSOs on the ZDT benchmark functions (except ZDT6), where its performance is second only to CMOPSO. On the UF suite, tpahaMOPSO outperforms the competitors on three problems (UF3, UF4, and UF7) and significantly outperforms the other algorithms on the three-objective problems UF8, UF9, and UF10, giving it six optimal mean IGD results on the ten UF test instances. For the UF4 and UF8 problems, tpahaMOPSO obtains the best mean values, while MOPSO obtains the best standard deviations. On the three-objective DTLZ suite, tpahaMOPSO performs far better than the five comparison algorithms on the DTLZ6 problem, and its performance is second only to NMPSO on the DTLZ7 problem. Overall, as seen in Table 3, the proposed tpahaMOPSO has the best IGD performance, with the smallest IGD value on 11 of the 22 benchmark functions. CMOPSO has seven best IGD results, ranking first among the comparison algorithms, while SMPSO and NMPSO have three and one best IGD results, ranking second and third, respectively; MOPSO and MPSOD achieve no best result on any test problem. From this analysis of the IGD comparison, it can be concluded that the proposed tpahaMOPSO achieves better IGD performance than the other comparison MOPSOs.
On the HV metric, Table 4 shows that the proposed tpahaMOPSO achieves larger HV values on the ZDT functions (except against CMOPSO on ZDT6) and better mean HV performance on the UF functions (except UF1, UF2, UF5, and UF6), and its HV performance significantly outperforms the compared MOPSOs on the DTLZ6 function. CMOPSO, NMPSO, SMPSO, and tpahaMOPSO obtain HV values that differ only slightly on the ZDT6 function. This is consistent with the IGD results: tpahaMOPSO obtains a more evenly distributed solution set than the five competitors on 11 benchmark problems.
To visually show the convergence and distribution performance of the proposed tpahaMOPSO, the results of tpahaMOPSO and the five comparison MOPSOs on the ZDT3, UF9, and DTLZ6 benchmark functions are displayed in Figs. 3, 4, and 5, respectively. Figure 3 shows that only CMOPSO, NMPSO, and tpahaMOPSO can approach the PF of ZDT3; however, the approximate PF obtained by NMPSO is poorly distributed. For the UF9 problem, Fig. 4 shows that all algorithms can approach the real PF; among them, the PSs obtained by CMOPSO and MPSOD are widely distributed in the objective space but converge poorly, while the results of NMPSO and MOPSO are too aggregated and cover only part of the actual PF. Only the solution set of tpahaMOPSO achieves both good convergence and wide distribution on the PF of UF9, followed by SMPSO. In Fig. 5, CMOPSO, NMPSO, and tpahaMOPSO obtain better convergence than MOPSO, SMPSO, and MPSOD on the PF of the DTLZ6 function. CMOPSO and tpahaMOPSO obtain solutions evenly distributed on the true PF, whereas the approximate PF of NMPSO is insufficiently distributed, and the solution sets of MOPSO and MPSOD are similar and widely distributed but do not converge to the real PF. These results indicate that tpahaMOPSO achieves better convergence and distribution than the other comparison MOPSOs on these benchmark functions. In addition, the IGD convergence curves of the proposed tpahaMOPSO and the five comparison MOPSOs on the ZDT3, UF9, and DTLZ6 test problems are presented in Fig. 6; it is evident that tpahaMOPSO converges faster and optimizes better on these different test problems.
Finally, to demonstrate the stability of tpahaMOPSO on different test problems, box plots of the IGD values obtained by the six PSO-based algorithms on the ZDT and UF benchmark suites and the DTLZ6 and DTLZ7 benchmark functions are presented in Fig. 7, where positions 1–6 on the horizontal axis correspond to CMOPSO, NMPSO, MOPSO, SMPSO, MPSOD, and tpahaMOPSO, respectively. As Fig. 7 shows, the solutions obtained by tpahaMOPSO remain within a more stable range on most test problems, indicating better stability and convergence accuracy than the other five contrast MOPSOs.
4.4 Comparative Experiments with MOEAs
Tables 5 and 6 show the mean and standard deviation values of the IGD and HV obtained by tpahaMOPSO and five comparison MOEAs (MOEA/D, NSGAII, SPEAR, DGEA, and PREA) over 30 independent runs on ZDT1–4, 6, UF1–10, and DTLZ1–7, respectively, with the best result on each test function shown in bold. As seen in Table 5, tpahaMOPSO obtains the four best IGD values on the ZDT test suite, far better than the comparison MOEAs; this shows that tpahaMOPSO's archive maintenance scheme plays a major role during the evolutionary process and yields a higher-quality final solution set. On the ZDT4 test instance, tpahaMOPSO has weaker IGD performance than MOEA/D and NSGAII. On the UF test functions, tpahaMOPSO obtains half of the smallest IGD values, and on the UF10 function its IGD metric is second only to MOEA/D. For the three-objective DTLZ benchmark problems, tpahaMOPSO obtains the smallest IGD result on the DTLZ6 test example; on the DTLZ7 function it obtains the smallest IGD mean, while NSGAII obtains the smallest standard deviation. Across all 22 test functions, tpahaMOPSO obtains half of the best IGD results. The comparison algorithms also perform well on some problems: NSGAII obtains six best IGD values, second only to the proposed tpahaMOPSO; SPEAR and PREA have the best IGD performance on the DTLZ4 and UF5 problems, respectively; MOEA/D obtains three best results; and DGEA does not perform best on any test problem. Therefore, it can be concluded that the distribution and convergence of tpahaMOPSO are superior to those of the five competing MOEAs.
Table 6 shows that the HV and IGD results of tpahaMOPSO differ across the three test suites. The HV performance of tpahaMOPSO on the ZDT test suite is consistent with the IGD metric. On the UF benchmark suite, the HV performance of tpahaMOPSO is higher than that of the other comparison MOEAs on the UF3, UF4, and UF7–10 problems; although its IGD performance is weaker than MOEA/D on the UF10 problem, its HV is better than that of MOEA/D. On the DTLZ6 instance, the excellent HV values of tpahaMOPSO mirror its IGD results. Among the five comparison MOEAs, DGEA has no best IGD result but obtains one of the highest HV values on DTLZ3. From this analysis of the HV metric, tpahaMOPSO obtains well-distributed optimal solutions on most benchmark functions.
To visualize the distribution of the results obtained by tpahaMOPSO and the MOEAs on different test problems, Figs. 8, 9, and 10 show the approximate PFs acquired on the disconnected ZDT3 problem, the UF9 problem, and the mostly degenerate DTLZ6 problem, respectively. Figure 8 shows that only the solution set of tpahaMOPSO completely covers the real PF of ZDT3, demonstrating its effectiveness on this benchmark problem. Among the five MOEAs, the approximate PF obtained by NSGAII is the best, but it is not as close to the true front as that of tpahaMOPSO; DGEA does not converge to the real PF, the solution set of MOEA/D is clustered at both ends of the real PF, and the solutions of PREA are clustered at one end. For the UF9 benchmark problem, the figures show that the non-dominated solution set obtained by tpahaMOPSO is the most evenly distributed and also the closest to the true front. Among the approximate PFs obtained by all algorithms on the DTLZ6 problem in Fig. 10, tpahaMOPSO and NSGAII produce similar results, but the approximate PF of tpahaMOPSO is more uniform. Furthermore, to compare convergence rates, Fig. 11 shows the IGD convergence trajectories of tpahaMOPSO and the five competitive MOEAs on three benchmark problems (ZDT3, UF9, and DTLZ6); tpahaMOPSO converges faster and reaches smaller values.
Finally, Fig. 12 shows the box plots of the IGD values obtained by tpahaMOPSO and the five contrast MOEAs over 30 independent runs on different test problems, where positions 1–6 on the horizontal axis represent MOEA/D, NSGAII, SPEAR, DGEA, PREA, and tpahaMOPSO, respectively. Consistent with the analysis in Table 5, among the final solution sets of the proposed tpahaMOPSO and the five competitive MOEAs, the solutions of the algorithm presented in this paper are the most stable.
4.5 Parameter Sensitivity Analysis of tpahaMOPSO
In the proposed tpahaMOPSO, three input parameters, w, c1, and c2, influence the performance of the algorithm. The inertia weight w determines the effect of the particle's previous velocity on the current velocity. c1 and c2 are two learning factors that reflect the degree to which a particle learns from pbest and gbest, respectively. A larger c1 value may reduce the convergence speed, since particles remain in a global exploration state; conversely, a larger c2 value may cause premature convergence through the loss of swarm diversity. Therefore, we analyze the parameters w, c1, and c2 experimentally in this subsection. We set w to 0.2, 0.4, 0.6, 0.8, and 1, and vary the two learning factors c1 and c2 over 0.5, 1, 1.5, 2, and 2.5, respectively. For each parameter setting, the algorithm was run independently 30 times on all test problems, and the resulting average IGD values are shown in Tables 7, 8, and 9.
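The roles of w, c1, and c2 are easiest to see in the canonical PSO update, sketched below for one particle. This shows the standard velocity and position equations only, with the function name `pso_velocity_update` and the default parameter values as our own illustrative choices; the actual update in tpahaMOPSO additionally involves the strategies described in Sect. 3.

```python
import random

def pso_velocity_update(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0):
    """Canonical per-dimension PSO update: inertia term w*v plus a
    cognitive pull toward pbest (scaled by c1) and a social pull
    toward gbest (scaled by c2), with random coefficients r1, r2."""
    r1, r2 = random.random(), random.random()
    v_new = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    x_new = [xi + vni for xi, vni in zip(x, v_new)]
    return x_new, v_new
```

When pbest and gbest coincide with the current position and the velocity is zero, the particle stays put, which is exactly the aggregation risk the pbest diversification rule of Sect. 3 is designed to mitigate.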
Table 7 shows that tpahaMOPSO performs better on most test problems when w is set to 0.2 or 0.4. Among the ZDT test functions, tpahaMOPSO is relatively sensitive to w on the ZDT4 problem: the larger the value of w, the worse the convergence. In contrast to its behavior on ZDT4, the algorithm converges better on the DTLZ7 problem when w is large. In addition, for the ZDT2, ZDT3, and DTLZ6 test functions, although the best values occur at larger w, the data show little difference among the IGD values of tpahaMOPSO. In conclusion, setting the inertia weight w of tpahaMOPSO to 0.2 or 0.4 is reasonable.
For different c1 and c2 values, Tables 8 and 9 show that on the ZDT test functions the proposed tpahaMOPSO is robust to the settings of c1 and c2 on the ZDT6 problem, and it is generally insensitive to changes in c1. However, Table 9 shows that as c2 decreases, the particles obtain less information from the global optimal solution, which degrades convergence on the ZDT1, ZDT2, and ZDT3 problems. For the UF test functions, different c1 values have some impact on the performance of tpahaMOPSO, but Table 8 shows that regardless of whether c1 is large or small, the IGD values obtained under the different c1 settings are all close to the optimal IGD values. On the other hand, changing c2 has a more significant effect on performance on the UF1 to UF9 test problems. Finally, tpahaMOPSO remains stable under different c1 and c2 settings on the DTLZ6 problem. On the DTLZ1 and DTLZ3 benchmark functions, tpahaMOPSO shows poor convergence regardless of the values of c1 and c2, while on the DTLZ7 test problem larger c1 and c2 values benefit both the convergence and the diversity of the algorithm. In summary, setting c1 and c2 to 2 is reasonable and beneficial for the performance of the algorithm.
4.6 Friedman Rank Test
The Friedman test [56] was used for ranking analysis in order to obtain a comparative statistical analysis of the results in Tables 3, 4, 5, and 6. The average Friedman-test rankings of all participating algorithms on ZDT, UF, and DTLZ are shown in Table 10 for the IGD indicator, and Table 11 presents the corresponding ranking results for HV. In both tables the P-values are less than 0.05, so there exist statistically significant differences between the algorithms. In addition, the proposed tpahaMOPSO achieves the best rank on the ZDT benchmark functions for both IGD and HV; on the UF test problems, both tpahaMOPSO and NSGAII obtain the best rank for IGD; and in terms of overall rank, both tpahaMOPSO and NSGAII receive the best rank for IGD, while for HV only tpahaMOPSO ranks first. According to the Friedman test, the tpahaMOPSO proposed in this paper and NSGAII perform best overall; however, the former ranks first three times while the latter ranks first only twice. Therefore, overall, the proposed tpahaMOPSO has the better performance.
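The average rankings reported in Tables 10 and 11 can be illustrated with a small sketch: each algorithm is ranked per problem, and the ranks are averaged across problems. This is a simplified version for illustration, assuming a lower-is-better metric such as IGD and ignoring ties; the function name `friedman_average_ranks` and the data shape are our own.

```python
def friedman_average_ranks(results):
    """Average rank of each algorithm across problems, where results[i][j]
    is algorithm j's metric value on problem i (lower is better, as with
    IGD). Ties are ignored for simplicity."""
    n, k = len(results), len(results[0])
    totals = [0.0] * k
    for row in results:
        # rank algorithms 1..k on this problem, best (smallest) first
        for rank, j in enumerate(sorted(range(k), key=lambda j: row[j]), 1):
            totals[j] += rank
    return [t / n for t in totals]
```

The subsequent significance test then compares these rank sums against the chi-squared distribution to obtain the P-value.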
5 Conclusion
This paper presents a hyperplane-assisted multi-objective particle swarm optimization with a twofold proportional assignment strategy, termed tpahaMOPSO. First, at each iteration, more particles are assigned to search the regions surrounding better-performing candidate solutions, so as to explore and exploit more promising search areas. In addition, by combining the convergence difference of particles with the domination relationship, the historical best is selected more reasonably and reliably, and the pbest selection range is expanded to reduce the risk of particle aggregation. Next, for the maintenance of the external archive, a hyperplane is established using an ideal point to evaluate how closely solutions approximate the PF, and shift-based density estimation is combined with it to effectively distinguish which of two solutions with the most similar search directions is better and should be retained. Finally, experimental results on 22 benchmark functions demonstrate that the proposed tpahaMOPSO has a clear advantage in balancing convergence and diversity, and comparisons with other MOPSOs and MOEAs show that it is strongly competitive in dealing with MOPs.
Although the excellent performance of the proposed tpahaMOPSO has been verified on benchmark functions, the algorithm still has shortcomings and room for improvement. For example, its applicability and effectiveness on realistic multi-objective problems require further research and verification. In addition, because the external archive maintenance method relies on the maximum number of fitness evaluations and on some Pareto solutions, applying the algorithm is difficult in practice: for general multi-objective engineering optimization problems, the appropriate maximum number of fitness evaluations is unknown. A major limitation of the proposed tpahaMOPSO is therefore that its archive maintenance strategy relies heavily on predetermined input parameters; this limitation is the focus of our next work.
In our future research, we hope to combine promising ideas to alleviate the limitations of the proposed tpahaMOPSO on general engineering optimization problems and to study problem-independent archive maintenance strategies that reduce the algorithm's dependence on predetermined parameters. Further improving the learning efficiency and convergence speed of the algorithm is also a key focus of our future work.
Data Availability
No datasets were generated or analyzed during the current study.
References
Xu, X.F., et al.: Multi-objective particle swarm optimization algorithm based on multi-strategy improvement for hybrid energy storage optimization configuration. Renew. Energy. 223, 120086 (2024). https://doi.org/10.1016/j.renene.2024.120086
Bakır, H., et al.: Dynamic switched crowding-based multi-objective particle swarm optimization algorithm for solving multi-objective AC-DC optimal power flow problem. Appl. Soft. Comput. 166, 112155 (2024). https://doi.org/10.1016/j.asoc.2024.112155
Zhong, R., et al.: Q-learning based vegetation evolution for numerical optimization and wireless sensor network coverage optimization. Alex. Eng. J. 87, 148–163 (2024). https://doi.org/10.1016/j.aej.2023.12.028
Huang, W., Zhang, W.: Multi-objective optimization based on an adaptive competitive swarm optimizer. Inf. Sci. 583, 266–287 (2022). https://doi.org/10.1016/j.ins.2021.11.031
Han, H., et al.: Robust multiobjective particle swarm optimization with feedback compensation strategy. IEEE Trans. Cybern. (2023). https://doi.org/10.1109/TCYB.2023.3336870
Shao, Y., et al.: Multi-objective neural evolutionary algorithm for combinatorial optimization problems. IEEE Trans. Neural Netw. Learn. Syst. 34(4), 2133–2143 (2021). https://doi.org/10.1109/TNNLS.2021.3105937
Deb, K., et al.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE. Trans. Evol. Comput. 6(2), 182–197 (2002). https://doi.org/10.1109/4235.996017
Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the strength Pareto evolutionary algorithm. TIK report. (2001). https://doi.org/10.3929/ethz-a-004284029
Zitzler, E., Künzli, S.: Indicator-based selection in multiobjective search. In: International Conference on Parallel Problem Solving from Nature, pp. 832–842. Springer, Berlin, Heidelberg (2004)
Zhang, Q., Li, H.: MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 11(6), 712–731 (2007). https://doi.org/10.1109/TEVC.2007.892759
Yang, L., Zhang, Y., Cao, J., Li, K., Wang, D.: A many-objective evolutionary algorithm based on reference vector guided selection and two diversity and convergence enhancement strategies. Appl. Soft Comput. 154, 111369 (2024). https://doi.org/10.1016/j.asoc.2024.111369
Xu, Y., et al.: An adaptive convergence enhanced evolutionary algorithm for many-objective optimization problems. Swarm Evol. Comput. 75, 101180 (2022). https://doi.org/10.1016/j.swevo.2022.101180
Luh, G.C., Chueh, C.H., Liu, W.W.: MOIA: multi-objective immune algorithm. Eng. Optimiz. 35(2), 143–164 (2003). https://doi.org/10.1080/0305215031000091578
Hancer, E.: A new multi-objective differential evolution approach for simultaneous clustering and feature selection. Eng. Appl. Artif. Intell. 87, 103307 (2020). https://doi.org/10.1016/j.engappai.2019.103307
Liu, J., Liu, J.: Applying multi-objective ant colony optimization algorithm for solving the unequal area facility layout problems. Appl. Soft Comput. 74, 167–189 (2019). https://doi.org/10.1016/j.asoc.2018.10.012
Coello, C.A.C., Pulido, G.T., Lechuga, M.S.: Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 8(3), 256–279 (2004). https://doi.org/10.1109/TEVC.2004.826067
Zhong, R., Zhang, C., Yu, J.: Hierarchical RIME algorithm with multiple search preferences for extreme learning machine training. Alex. Eng. J. 110, 77–98 (2025). https://doi.org/10.1016/j.aej.2024.09.109
Huang, H., et al.: Comprehensive multi-view representation learning via deep autoencoder-like nonnegative matrix factorization. IEEE Trans. Neural Netw. Learn. Syst. (2024). https://doi.org/10.1109/TNNLS.2023.3304626
Huang, H., et al.: Diverse deep matrix factorization with hypergraph regularization for multi-view data representation. IEEE-CAA J. Automatica Sin. 10(11), 2154–2167 (2023). https://doi.org/10.1109/JAS.2022.105980
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995)
Wang, Y., et al.: A new two-stage based evolutionary algorithm for solving multi-objective optimization problems. Inf. Sci. 611, 649–659 (2022). https://doi.org/10.1016/j.ins.2022.07.180
Lin, Q., et al.: A novel multi-objective particle swarm optimization with multiple search strategies. Eur. J. Oper. Res. 247(3), 732–744 (2015). https://doi.org/10.1016/j.ejor.2015.06.071
Figueiredo, E.M.N., Ludermir, T.B., Bastos-Filho, C.J.A.: Many objective particle swarm optimization. Inf. Sci. 374, 115–134 (2016). https://doi.org/10.1016/j.ins.2016.09.026
Wu, B., et al.: Adaptive multiobjective particle swarm optimization based on evolutionary state estimation. IEEE T. Cybern. 51(7), 3738–3751 (2019). https://doi.org/10.1109/TCYB.2019.2949204
Han, H.G., et al.: Adaptive candidate estimation-assisted multi-objective particle swarm optimization. Sci China Tech Sci. 65(8), 1685–1699 (2022). https://doi.org/10.1007/s11431-021-2018-x
Li, L., et al.: On the norm of dominant difference for many-objective particle swarm optimization. IEEE T. Cybern. 51(4), 2055–2067 (2019). https://doi.org/10.1109/TCYB.2019.2922287
Li, L., Wang, W., Xu, X.: Multi-objective particle swarm optimization based on global margin ranking. Inf. Sci. 375, 30–47 (2017). https://doi.org/10.1016/j.ins.2016.08.043
Huang, W., Zhang, W.: Adaptive multi-objective particle swarm optimization using three-stage strategy with decomposition. Soft. Comput. 25(23), 14645–14672 (2021)
Bai, X., et al.: A distribution-knowledge-guided assessment strategy for multiobjective particle swarm optimization. Inf. Sci. 648, 119603 (2023). https://doi.org/10.1016/j.ins.2023.119603
Han, H., et al.: Adaptive multiple selection strategy for multi-objective particle swarm optimization. Inf. Sci. 624, 235–251 (2023). https://doi.org/10.1016/j.ins.2022.12.077
Li, Y., Zhang, Y., Hu, W.: Adaptive multi-objective particle swarm optimization based on virtual Pareto front. Inf. Sci. 625, 206–236 (2023). https://doi.org/10.1016/j.ins.2022.12.079
Huang, W., Zhang, W.: Adaptive multi-objective particle swarm optimization with multi-strategy based on energy conversion and explosive mutation. Appl. Soft Comput. 113, 107937 (2021). https://doi.org/10.1016/j.asoc.2021.107937
Yang, L., Hu, X., Li, K.: A vector angles-based many-objective particle swarm optimization algorithm using archive. Appl. Soft Comput. 106, 107299 (2021). https://doi.org/10.1016/j.asoc.2021.107299
Hu, W., Yen, G.G.: Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system. IEEE Trans. Evol. Comput. 19(1), 1–18 (2013). https://doi.org/10.1109/TEVC.2013.2296151
Han, H., Lu, W., Qiao, J.: An adaptive multiobjective particle swarm optimization based on multiple adaptive methods. IEEE T. Cybern. 47(9), 2754–2767 (2017). https://doi.org/10.1109/TCYB.2017.2692385
Feng, D., et al.: A particle swarm optimization algorithm based on modified crowding distance for multimodal multi-objective problems. Appl. Soft Comput. 152, 111280 (2024). https://doi.org/10.1016/j.asoc.2024.111280
Liu, Q., et al.: All particles driving particle swarm optimization: Superior particles pulling plus inferior particles pushing. Knowledge-Based Syst. 249, 108849 (2022). https://doi.org/10.1016/j.knosys.2022.108849
Liu, Y., et al.: A many-objective evolutionary algorithm using a one-by-one selection strategy. IEEE T. Cybern. 47(9), 2689–2702 (2017). https://doi.org/10.1109/TCYB.2016.2638902
Zhang, X., Tian, Y., Jin, Y.: A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 19(6), 761–776 (2014). https://doi.org/10.1109/TEVC.2014.2378512
Li, M., Yang, S., Liu, X.: Shift-based density estimation for Pareto-based algorithms in many-objective optimization. IEEE Trans. Evol. Comput. 18(3), 348–365 (2013). https://doi.org/10.1109/TEVC.2013.2262178
Zhang, J., et al.: An angle-based many-objective evolutionary algorithm with shift-based density estimation and sum of objectives. Expert Syst. Appl. 209, 118333 (2022). https://doi.org/10.1016/j.eswa.2022.118333
Liu, Z.Z., Wang, Y., Huang, P.Q.: AnD: a many-objective evolutionary algorithm with angle-based selection and shift-based density estimation. Inf. Sci. 509, 400–419 (2020). https://doi.org/10.1016/j.ins.2018.06.063
Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: empirical results. Evol. Comput. 8(2), 173–195 (2000). https://doi.org/10.1162/106365600568202
Zhang, Q., et al.: Multiobjective optimization test instances for the CEC 2009 special session and competition. Mech Eng NY. 264, 1–30 (2008)
Deb, K., et al. Scalable test problems for evolutionary multiobjective optimization. In: Evolutionary multiobjective optimization, theoretical advances and applications, pp. 105-145. Springer, London (2005)
Zhang, X., et al.: A competitive mechanism based multi-objective particle swarm optimizer with fast convergence. Inf. Sci. 427, 63–76 (2018). https://doi.org/10.1016/j.ins.2017.10.037
Lin, Q., et al.: Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems. IEEE Trans. Evol. Comput. 22(1), 32–46 (2016). https://doi.org/10.1109/TEVC.2016.2631279
Nebro, A. J., et al.: SMPSO: A new PSO-based metaheuristic for multi-objective optimization. In: 2009 IEEE Symposium on computational intelligence in multi-criteria decision-making (MCDM), IEEE, pp. 66–73 (2009). https://doi.org/10.1109/mcdm.2009.4938830
Dai, C., Wang, Y., Ye, M.: A new multi-objective particle swarm optimization algorithm based on decomposition. Inf. Sci. 325, 541–557 (2015). https://doi.org/10.1016/j.ins.2015.07.018
Jiang, S., Yang, S.: A strength Pareto evolutionary algorithm based on reference direction for multiobjective and many-objective optimization. IEEE Trans. Evol. Comput. 21(3), 329–346 (2017). https://doi.org/10.1109/TEVC.2016.2592479
He, C., Cheng, R., Yazdani, D.: Adaptive offspring generation for evolutionary large-scale multiobjective optimization. IEEE Trans. Syst. Man Cybern. -Syst. 52(2), 786–798 (2020). https://doi.org/10.1109/TSMC.2020.3003926
Yuan, J., et al.: Investigating the properties of indicators and an evolutionary many-objective algorithm using promising regions. IEEE Trans. Evol. Comput. 25(1), 75–86 (2020). https://doi.org/10.1109/TEVC.2020.2999100
Tian, Y., et al.: PlatEMO: a MATLAB platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 12(4), 73–87 (2017). https://doi.org/10.1109/MCI.2017.2742868
Bosman, P.A.N., Thierens, D.: The balance between proximity and diversity in multiobjective evolutionary algorithms. IEEE Trans. Evol. Comput. 7(2), 174–188 (2003). https://doi.org/10.1109/TEVC.2003.810761
Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 3(4), 257–271 (1999). https://doi.org/10.1109/4235.797969
Cui, Y., Meng, X., Qiao, J.: A multi-objective particle swarm optimization algorithm based on two-archive mechanism. Appl. Soft Comput. 119, 108532 (2022). https://doi.org/10.1016/j.asoc.2022.108532
Acknowledgements
This work was supported in part by the Key Laboratory of Evolutionary Artificial Intelligence in Guizhou (Qian Jiaoji [2022] No. 059), the Key Talents Program in the digital economy of Guizhou Province, and the National Natural Science Foundation of China (NSFC 62062071).
Author information
Authors and Affiliations
Contributions
Qian Song and Yanmin Liu provided the main concept of this work and wrote the manuscript. Xiaoyan Zhang and Yansong Zhang made the experiments. All the authors reviewed the manuscript.
Corresponding author
Ethics declarations
Conflict of Interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Song, Q., Liu, Y., Zhang, X. et al. Hyperplane-Assisted Multi-objective Particle Swarm Optimization with Twofold Proportional Assignment Strategy. Int J Comput Intell Syst 17, 306 (2024). https://doi.org/10.1007/s44196-024-00702-6