A dual-operator strategy for a multiobjective evolutionary algorithm based on decomposition
Introduction
A multi-objective optimization problem (MOP) has several objectives to be optimized simultaneously [1], [2], [3]. An MOP can be stated as follows [4], [5], [6]:

minimize \(F(x) = (f_1(x), f_2(x), \ldots, f_m(x))^{T}\), subject to \(x \in \Omega\),

where \(x = (x_1, x_2, \ldots, x_n)^{T}\) is the decision variable vector, \(\Omega\) is the decision space, and each \(x \in \Omega\) is a solution. \(F: \Omega \rightarrow \mathbb{R}^{m}\) constitutes \(m\) objective functions.
Let \(u = (u_1, \ldots, u_m)^{T}\) and \(v = (v_1, \ldots, v_m)^{T}\) be two objective vectors; \(u\) is said to dominate \(v\) if and only if \(u_i \le v_i\) for every \(i \in \{1, \ldots, m\}\) and \(u_j < v_j\) for at least one index \(j\). A solution \(x^{*} \in \Omega\) is a Pareto optimal point if no other \(x \in \Omega\) dominates it. The set of all Pareto optimal points is called the Pareto set (PS). Accordingly, \(PF = \{F(x) \mid x \in PS\}\) is called the Pareto front (PF) [7].
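The dominance relation above can be sketched directly in code (a minimal illustration, not code from the paper; minimization is assumed, and the function name is ours):

```python
import numpy as np

def dominates(f_x, f_y):
    """Return True if objective vector f_x Pareto-dominates f_y under
    minimization: f_x is no worse in every objective and strictly
    better in at least one."""
    f_x, f_y = np.asarray(f_x), np.asarray(f_y)
    return bool(np.all(f_x <= f_y) and np.any(f_x < f_y))

# (1, 2) dominates (2, 3); (1, 3) and (2, 2) are mutually non-dominated.
```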
Many multi-objective evolutionary algorithms (MOEAs) have been proposed in the last two decades. Most MOEAs – for example, NSGA-II [8] – are based on Pareto dominance to search for the PS. In contrast, the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [9] transforms an MOP into many single-objective optimization subproblems and finds a set of solutions approximating the PF in a single run. In the years since MOEA/D was proposed, variants of it have been developed to solve complex MOPs, such as MOEA/D-DRA [10] and MOEA/D-FRRMAB [11].
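The decomposition idea can be illustrated with the Tchebycheff approach, one of the standard scalarizing functions used by MOEA/D [9]; the helper below is an illustrative sketch under that assumption, not the paper's implementation:

```python
import numpy as np

def tchebycheff(f, weight, z_star):
    """Tchebycheff scalarization: collapses an objective vector f into a
    single value for one weight vector (one subproblem), measured against
    the ideal point z_star."""
    return float(np.max(np.asarray(weight) * np.abs(np.asarray(f) - np.asarray(z_star))))

# A set of evenly spread weight vectors defines the subproblems (bi-objective case):
weights = [np.array([w, 1.0 - w]) for w in np.linspace(0.0, 1.0, 11)]
```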
Typically, effective MOEAs have both good convergence and good diversity. In evolutionary algorithms (EAs), the genetic operator plays an important role in population evolution [12]. Generally, an EA uses only one genetic operator to generate offspring, but the ability of a single operator is limited: different MOPs suit different genetic operators, and different operators may even suit different stages of the same problem. Therefore, fusing different operators works well on complex MOPs: combining operators generates better offspring, improves the population's convergence, and thereby improves the MOEA's performance.
Li et al. proposed MOEA/D-FRRMAB [11], which combines multiple genetic operators in one framework and chooses the appropriate operator at the appropriate time, thereby enhancing the MOEA's effectiveness. Qi et al. [13] combined differential evolution (DE) and simulated binary crossover (SBX) and randomly determined which operator to use for a test instance. Lin et al. [14] proposed an adaptive method for selecting parameters and genetic operators based on the upper confidence bound (UCB) and prior parameter knowledge. Rostami et al. have also produced several effective MOEAs for many-objective optimization problems [15], [16], [17]. Chen et al. [18] designed an adaptive hybrid genetic operation to update the population.
In recent years, many EAs based on machine learning have been proposed. Lin et al. [19] proposed an algorithm that builds a classification model on the search space to filter all newly generated solutions. Pan et al. [20] proposed a surrogate-assisted many-objective evolutionary algorithm that uses an artificial neural network to predict the dominance relationship between candidate solutions and reference solutions, instead of approximating the objective values separately. Zhang et al. [21] proposed a classification-based pre-selection (CPS) strategy for evolutionary multi-objective optimization: two populations labeled 'good' and 'bad' are used to train a classifier, which then judges the quality of offspring. Machine-learning models have achieved great success in pre-selecting offspring, but training them takes a long time. Such EAs are therefore better suited to expensive problems and are unsuitable for problems that require results within a short time.
In this paper, under the MOEA/D framework, we examine whether co-evolution exists between two space-search strategies, and we propose a genetic operator based on the concept of co-evolution. To enhance the operator's space-search and co-evolution capabilities, we propose an adaptive parameter-selection strategy based on a learning strategy. We deliberately do not train a machine-learning model directly, because model training itself is time-consuming.
The learning rate is an important concept in machine learning. Take gradient descent, a parameter-optimization algorithm widely used to minimize model error: for gradient descent to perform well, the learning rate must lie in an appropriate range. The learning rate determines how fast a parameter moves toward its optimal value. If it is too high, the parameter is likely to overshoot the optimum; if it is too low, optimization becomes inefficient and the algorithm may fail to converge within a reasonable time. Thus, the learning rate is critical to an algorithm's performance.

In MOEA/D-DOS, we first propose a genetic operator, DE-SBX/either-or, based on the concepts of co-evolution and Duty Ratio. The Duty Ratio parameter decides the proportions of DE and SBX to be used in the current period, so the performance of DE-SBX/either-or is strongly affected by this parameter. To improve its performance, we use a parameter-adjustment method based on a learning strategy to find the best Duty Ratio for the current problem. We performed comparative experiments to verify the effectiveness of co-evolution and DOS. First, DE-SBX/either-or produced better results on a series of UF test instances than DE and SBX alone did, indicating that the co-evolution-based genetic operator is effective. Next, DE-SBX/either-or with the DOS strategy outperformed DE, SBX, and plain DE-SBX/either-or on the same instances, showing the effectiveness of the DOS strategy. Finally, we selected six algorithms to compare with MOEA/D-DOS; the experimental results showed that MOEA/D-DOS outperformed the other EAs used in our experimental studies.
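As a rough sketch of how a Duty-Ratio-driven either-or operator could work (the function names and the way the Duty Ratio is consumed are our illustration; the DE/rand/1 and SBX formulas below follow their standard textbook forms, not necessarily the paper's exact settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1 mutation: child = x_r1 + F * (x_r2 - x_r3)."""
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def sbx(parent1, parent2, eta=20.0):
    """Simulated binary crossover (one child), standard spread-factor form."""
    u = rng.random(parent1.shape)
    beta = np.where(u <= 0.5, (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    return 0.5 * ((1 + beta) * parent1 + (1 - beta) * parent2)

def de_sbx_either_or(pop, i, duty_ratio):
    """Either-or operator: with probability `duty_ratio` apply DE/rand/1,
    otherwise SBX, so the two operators co-evolve the same population."""
    if rng.random() < duty_ratio:
        return de_rand_1(pop, i)
    j = int(rng.integers(len(pop)))
    return sbx(pop[i], pop[j])
```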
Section snippets
DE-SBX/either-or and DOS
In this section, we propose a genetic operator, namely DE-SBX/either-or, which is not only very stable but also searches quickly. Building on it, we propose DOS, which adjusts the Duty Ratio parameter in DE-SBX/either-or to improve its performance.
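A minimal sketch of a learning-rate-style Duty Ratio update, assuming DOS rewards the operator with the higher recent success rate (the exact DOS update rule is not reproduced in this snippet; `lr`, `de_success`, and `sbx_success` are illustrative names):

```python
def update_duty_ratio(duty_ratio, de_success, sbx_success, lr=0.1):
    """Move the Duty Ratio toward the operator with the higher recent
    success count. `lr` plays the role of a learning rate: too large
    overshoots the best ratio, too small adapts slowly."""
    total = de_success + sbx_success
    if total == 0:
        return duty_ratio  # no feedback this period; keep the current ratio
    target = de_success / total          # fraction of recent improvements from DE
    new = duty_ratio + lr * (target - duty_ratio)
    return min(max(new, 0.05), 0.95)     # keep both operators active
```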
Algorithm framework of MOEA/D-DOS
Based on DOS, we propose MOEA/D-DOS under the MOEA/D framework. The pseudo-code of MOEA/D-DOS is shown in Algorithm 3.
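Since Algorithm 3 is not reproduced in this snippet, the skeleton below follows the standard MOEA/D template [9] with an either-or operator on a toy bi-objective problem; it is an illustrative sketch under those assumptions, not the paper's Algorithm 3:

```python
import numpy as np

def moead_dos_sketch(n_sub=20, n_var=5, T=5, gens=50, seed=1):
    """Compact MOEA/D-style loop: decompose via Tchebycheff, generate one
    child per subproblem with a duty-ratio either-or operator, update the
    neighbourhood. Toy problem: minimize sum(x^2) and sum((x-1)^2)."""
    rng = np.random.default_rng(seed)

    def f(x):
        return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

    # Subproblems: evenly spread weights and each weight's T nearest neighbours.
    W = np.stack([np.linspace(0, 1, n_sub), 1 - np.linspace(0, 1, n_sub)], axis=1)
    dist = np.linalg.norm(W[:, None] - W[None, :], axis=2)
    B = np.argsort(dist, axis=1)[:, :T]

    X = rng.random((n_sub, n_var))
    F = np.array([f(x) for x in X])
    z = F.min(axis=0)                     # ideal point
    duty_ratio = 0.5                      # fixed here; DOS would adapt it

    def tcheby(fv, w):
        return np.max(w * np.abs(fv - z))

    for _ in range(gens):
        for i in range(n_sub):
            if rng.random() < duty_ratio:  # DE/rand/1 branch
                r1, r2, r3 = rng.choice(B[i], 3, replace=False)
                y = X[r1] + 0.5 * (X[r2] - X[r3])
            else:                          # SBX-style branch (one child)
                j = rng.choice(B[i])
                u = rng.random(n_var)
                beta = np.where(u <= 0.5, (2 * u) ** (1 / 21),
                                (1 / (2 * (1 - u))) ** (1 / 21))
                y = 0.5 * ((1 + beta) * X[i] + (1 - beta) * X[j])
            y = np.clip(y, 0.0, 1.0)
            fy = f(y)
            z = np.minimum(z, fy)          # update the ideal point
            for j in B[i]:                 # neighbourhood replacement
                if tcheby(fy, W[j]) < tcheby(F[j], W[j]):
                    X[j], F[j] = y, fy
    return X, F
```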
Comparison study
In this section, we study the performance of DE-SBX/either-or and DOS, using MOEA/D [9] as the algorithm framework. The parameters of MOEA/D are the same as in [9], and the parameters of DE, SBX, and DE-SBX/either-or are as follows:
(1) DE/rand/1:
(2) SBX:
(3) DE-SBX/either-or:
Experimental setting
In this section, we compare MOEA/D-DOS with MOEA/D-FRRMAB [11], GrEA [29], SPEA2SDE [30], IMMOEA [31], RMMEDA [32], and CAMOEA [33] to verify its performance. All of the test instances were run on identical computers (i7-6700 processor at 3.4 GHz, with 8 GB RAM). All results for the compared algorithms came from PlatEMO [34], and MOEA/D-DOS was run on jMetal. Finally, we used UF [26], DTLZ [35], and WFG [36] as the test instances. All of the experimental results are shown on
Conclusion
In this paper, we have proposed the DE-SBX/either-or genetic operator and, based on it, DOS, which dynamically optimizes the Duty Ratio parameter. Finally, we proposed the MOEA/D-DOS algorithm. This paper has focused on the co-evolution ability of different genetic operators: DE-SBX/either-or uses the co-evolution of DE/rand/1 and SBX to generate offspring and achieves good performance, and DOS further improves the co-evolution effect of DE-SBX/either-or. The experimental
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
The work is partially supported by the National Natural Science Foundation of China (Nos. U1836216, 61702310, 61772322), the major fundamental research project of Shandong, China (No. ZR2019ZD03), and the Taishan Scholar Project of Shandong, China .
References (36)
- et al., Crowd evacuation simulation approach based on navigation knowledge and two-layer control mechanism, Inform. Sci. (2018)
- et al., A path planning approach for crowd evacuation in buildings based on improved artificial bee colony algorithm, Appl. Soft Comput. (2018)
- et al., Leader recommend operators selection strategy for a multiobjective evolutionary algorithm based on decomposition, Inform. Sci. (2021)
- et al., An immune multi-objective optimization algorithm with differential evolution inspired recombination, Appl. Soft Comput. (2015)
- et al., Adaptive composite operator selection and parameter control for multiobjective evolutionary algorithm, Inform. Sci. (2016)
- et al., A fast hypervolume driven selection mechanism for many-objective optimisation problems, Swarm Evol. Comput. (2017)
- et al., A many-objective population extremal optimization algorithm with an adaptive hybrid mutation operation, Inform. Sci. (2019)
- et al., Preselection via classification: A case study on evolutionary multiobjective optimization, Inform. Sci. (2018)
- et al., Hybrid artificial bee colony algorithm for a parallel batching distributed flow-shop problem with deteriorating jobs, IEEE Trans. Cybern. (2020)
- Multi-Objective Optimization Using Evolutionary Algorithms, Vol. 16 (2001)
- A hybrid multi-objective artificial bee colony algorithm for flexible task scheduling problems in cloud computing system, Cluster Comput.
- Graph neural network encoding for community detection in attribute networks, IEEE Trans. Cybern.
- Nonlinear Multiobjective Optimization, Vol. 12
- A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput.
- MOEA/D: A multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput.
- The performance of a new version of MOEA/D on CEC09 unconstrained MOP test instances
- Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput.
- Covariance matrix adaptation pareto archived evolution strategy with hypervolume-sorted adaptive grid algorithm, Integr. Comput.-Aided Eng.