
Knowledge-Based Systems

Volume 240, 15 March 2022, 108141

A dual-operator strategy for a multiobjective evolutionary algorithm based on decomposition

https://doi.org/10.1016/j.knosys.2022.108141

Abstract

Evolutionary Algorithms (EAs) are population-based optimization methods that follow survival-of-the-fittest rules. The performance of an EA can be greatly improved by an appropriate genetic operator, so how to select a suitable genetic operator is a key issue. To address this problem, several genetic operators are often mixed and applied with a fixed probability to improve the spatial search capability. However, a fixed probability value is difficult to tune for most complex multi-objective optimization problems (MOPs). In this paper, based on the concepts of co-evolution and Duty Ratio, we build a genetic operator that combines Differential Evolution (DE) and Simulated Binary Crossover (SBX), and the Duty Ratio parameter is adjusted by learning from the historical usage of DE and SBX. Under the framework of the Multi-objective Evolutionary Algorithm Based on Decomposition (MOEA/D), we propose a dual-operator strategy (DOS) based on this learning strategy, namely MOEA/D-DOS. We compared MOEA/D-DOS with six other multi-objective EAs, and the results show that MOEA/D-DOS achieves better performance.

Introduction

A multi-objective optimization problem (MOP) has several objectives to be optimized [1], [2], [3]. An MOP can be stated as follows [4], [5], [6]:

$$\begin{aligned} \text{minimize} \quad & F(x) = \big(f_1(x), \ldots, f_m(x)\big)^{T} \\ \text{subject to} \quad & x \in \Omega, \end{aligned}$$

where $\Omega \subseteq \mathbb{R}^{n}$ is the decision space, $x = (x_1, \ldots, x_n)^{T} \in \Omega$ is a decision variable vector (a solution), and $F: \Omega \rightarrow \mathbb{R}^{m}$ consists of $m$ objective functions.

Let $x^{1}, x^{2} \in \Omega$. $x^{1}$ is said to dominate $x^{2}$ if and only if $f_i(x^{1}) \le f_i(x^{2})$ for every $i \in \{1, \ldots, m\}$ and $f_j(x^{1}) < f_j(x^{2})$ for at least one index $j \in \{1, \ldots, m\}$. A solution $x^{*} \in \Omega$ is Pareto optimal if no other $x \in \Omega$ dominates $x^{*}$. The set of all Pareto optimal points is called the Pareto set (PS). Accordingly, $PF = \{F(x) \in \mathbb{R}^{m} \mid x \in PS\}$ is called the Pareto front (PF) [7].
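
As an illustration, the following is a minimal Python sketch of the dominance test defined above for minimization; the function name and the NumPy array representation are our own choices, not taken from the paper.

import numpy as np

def dominates(f1: np.ndarray, f2: np.ndarray) -> bool:
    # f1 dominates f2 (minimization) when it is no worse in every
    # objective and strictly better in at least one objective.
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

# Example: (1.0, 2.0) dominates (1.5, 2.0); (1.0, 3.0) and (2.0, 1.0)
# are mutually non-dominated.
print(dominates(np.array([1.0, 2.0]), np.array([1.5, 2.0])))  # True
print(dominates(np.array([1.0, 3.0]), np.array([2.0, 1.0])))  # False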

Many multi-objective evolutionary algorithms (MOEAs) have been proposed in the last two decades. Most MOEAs – for example, NSGA-II [8] – are based on Pareto dominance to search for the PS. In contrast, the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [9] transforms an MOP into many single-objective optimization subproblems and finds a set of solutions to approximate the PF in a single run. Since MOEA/D was proposed, several variants have been developed to solve complex MOPs, such as MOEA/D-DRA [10] and MOEA/D-FRRMAB [11].
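
For example, the widely used Tchebycheff approach in [9] replaces the MOP by $N$ scalar subproblems; the $j$th subproblem minimizes

$$ g^{te}\big(x \mid \lambda^{j}, z^{*}\big) = \max_{1 \le i \le m} \lambda_i^{j}\,\big|f_i(x) - z_i^{*}\big|, $$

where $\lambda^{j} = (\lambda_1^{j}, \ldots, \lambda_m^{j})^{T}$ is a weight vector and $z^{*}$ is the ideal point estimated during the search.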

Typically, effective MOEAs have both good convergence and good diversity. In evolutionary algorithms (EAs), the genetic operator often plays an important role in population evolution [12]. Generally, an EA uses only one genetic operator to generate offspring. In fact, the ability of a single genetic operator is often limited: different MOPs are suited to different genetic operators, and different operators may even be appropriate at different stages of the same problem. Therefore, fusing different operators is beneficial for complex MOPs: using different genetic operators generates better offspring, improves the population's convergence, and thereby improves the MOEA's performance.

Li et al. [11] proposed MOEA/D-FRRMAB, which combines multiple genetic operators in one framework and chooses the appropriate operator at the appropriate time, thereby enhancing the MOEA's effectiveness. Qi et al. [13] combined Differential Evolution (DE) and Simulated Binary Crossover (SBX) and randomly determined which genetic operator to use for a test instance. Lin et al. [14] proposed an adaptive method for selecting parameters and genetic operators based on the upper confidence bound (UCB) and prior parameter knowledge. Rostami et al. also developed several effective MOEAs for many-objective optimization problems [15], [16], [17]. Chen et al. [18] designed an adaptive hybrid genetic operation to update the population.

In recent years, many researchers have proposed EAs based on machine learning. Lin et al. [19] proposed an algorithm that builds a classification model on the search space to filter all newly generated solutions. Pan et al. [20] proposed a surrogate-assisted many-objective evolutionary algorithm that uses an artificial neural network to predict the dominance relationship between candidate solutions and reference solutions, instead of approximating the objective values separately. Zhang et al. [21] proposed a classification-based pre-selection (CPS) strategy for evolutionary multi-objective optimization; the strategy uses two populations labeled 'good' and 'bad' to train a classifier, which is then used to judge the quality of offspring. Machine-learning models have achieved great success in pre-selecting offspring, but training them takes a long time. Such EAs are therefore better suited to expensive problems and are unsuitable for problems whose results must be obtained in a short time.
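
To make the pre-selection idea above concrete, here is a minimal sketch, assuming a generic off-the-shelf classifier over decision vectors; the feature representation, the classifier choice, and all function names are our own assumptions and not the CPS implementation of [21].

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_preselector(good: np.ndarray, bad: np.ndarray):
    # Fit a binary classifier on solutions labeled 'good' (1) and 'bad' (0).
    X = np.vstack([good, bad])
    y = np.hstack([np.ones(len(good)), np.zeros(len(bad))])
    return KNeighborsClassifier(n_neighbors=3).fit(X, y)

def preselect(model, candidates: np.ndarray) -> np.ndarray:
    # Keep only the candidate offspring predicted to be 'good',
    # so that expensive objective evaluations are spent on them.
    return candidates[model.predict(candidates) == 1]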

Under the MOEA/D framework, this paper discusses whether co-evolution exists between the two space-search strategies, and we propose a genetic operator based on the concept of co-evolution. To enhance the operator's space-search and co-evolution capabilities, we propose an adaptive parameter-selection strategy based on a learning strategy. We did not directly use machine learning to train a model, because the training process itself takes considerable time.

The learning rate is an important concept in machine learning. Take gradient descent as an example: it is a parameter-optimization algorithm widely used to minimize model errors. To make gradient descent perform well, the learning rate must be set within an appropriate range, because it determines how fast the parameters move toward the optimal value. If the learning rate is too high, the parameters are likely to overshoot the optimum; if it is too low, optimization is inefficient and the algorithm may fail to converge within a reasonable time. Thus, the learning rate is very important to an algorithm's performance.

In MOEA/D-DOS, we first propose a genetic operator, DE-SBX/either-or, based on the concepts of co-evolution and Duty Ratio. It uses the Duty Ratio parameter to decide the proportions of DE and SBX used in the current period, so the performance of DE-SBX/either-or is strongly affected by this parameter. To improve its performance, we use a parameter-adjustment method based on a learning strategy to find the best Duty Ratio for the current problem.

We performed comparative experiments to verify the effectiveness of co-evolution and DOS. First, DE-SBX/either-or showed better results on a series of UF test instances than DE and SBX did, indicating that the genetic operator based on co-evolution is effective. Next, DE-SBX/either-or with the DOS strategy showed better results on the UF test instances than DE, SBX, and plain DE-SBX/either-or, demonstrating the effectiveness of the DOS strategy. Finally, we selected six algorithms to compare with MOEA/D-DOS; the experimental results showed that MOEA/D-DOS outperformed the other EAs used in our experimental studies.
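
To illustrate the Duty Ratio idea described above, here is a minimal sketch in which the either-or operator draws DE with probability equal to the Duty Ratio and SBX otherwise; whether the original operator uses a per-offspring draw or a per-period quota is not specified in this excerpt, and the names below are illustrative only.

import random

def either_or_variation(parents, duty_ratio, de_op, sbx_op):
    # Apply DE with probability `duty_ratio`, otherwise SBX, and
    # report which operator produced the offspring so its usage
    # can be recorded for the learning-based adjustment.
    if random.random() < duty_ratio:
        return de_op(parents), "DE"
    return sbx_op(parents), "SBX"

With duty_ratio = 0.5, DE and SBX are expected to be used equally often, which matches the P = 0.5 setting listed later for plain DE-SBX/either-or.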

Section snippets

DE-SBX/either-or and DOS

In this section, we propose a genetic operator, namely DE-SBX/either-or, which is not only very stable but also has a fast search speed. Based on DE-SBX/either-or, we then propose DOS, which adjusts the Duty Ratio parameter of DE-SBX/either-or to improve its performance.
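
The excerpt states only that the Duty Ratio is adjusted by learning from the historical usage of DE and SBX, so the following is one plausible reading rather than the paper's actual rule: P is nudged toward the operator that recently produced more successful offspring, with a learning-rate-like step size. All names and the update formula are our assumptions.

def update_duty_ratio(p, de_success, sbx_success, step=0.1):
    # Move the probability of choosing DE toward its recent share of
    # successful offspring; `step` plays the role of a learning rate.
    total = de_success + sbx_success
    if total == 0:
        return p                              # no evidence this period
    target = de_success / total               # empirical DE success share
    p = (1.0 - step) * p + step * target      # smoothed adjustment
    return min(max(p, 0.05), 0.95)            # keep both operators active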

Algorithm framework of MOEA/D-DOS

Based on DOS and the MOEA/D framework, we propose MOEA/D-DOS. The pseudo-code of MOEA/D-DOS is given in Algorithm 3.

Comparison study

In this section, we study the performance of DE-SBX/either-or and DOS. We used the MOEA/D in [9] as the algorithm framework to test the performance of DE-SBX/either-or and DOS. The parameters of MOEA/D are the same as in [9], and the parameters of DE, SBX, and DE-SBX/either-or are set as follows (a sketch of the two operators with these settings is given after the list):

(1) DE/rand/1: CR=1, F=0.5

(2) SBX: η=20

(3) DE-SBX/either-or: CR=1, F=0.5, η=20, P=0.5
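
For reference, here is a hedged sketch of the two variation operators with the parameter values listed above, using the standard textbook formulations of DE/rand/1 and SBX; boundary handling, per-variable crossover probabilities, and the exact jMetal implementations may differ.

import numpy as np

def de_rand_1(x_r1, x_r2, x_r3, F=0.5):
    # DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3).
    # With CR = 1 every component of the mutant enters the trial vector.
    return x_r1 + F * (x_r2 - x_r3)

def sbx_crossover(p1, p2, eta=20.0, seed=None):
    # Simulated Binary Crossover with distribution index eta (one child shown).
    rng = np.random.default_rng(seed)
    u = rng.random(p1.shape)
    beta = np.where(
        u <= 0.5,
        (2.0 * u) ** (1.0 / (eta + 1.0)),
        (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)),
    )
    return 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)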

Experimental setting

For this section, we compared MOEA/D-DOS with MOEA/D-FRRMAB [11], GrEA [29], SPEA2SDE [30], IMMOEA [31], RMMEDA [32], and CAMOEA [33] to verify the performance of MOEA/D-DOS. All of the test instances were run on identical computers (i7-6700 processor at 3.4 GHz with 8 GB RAM). All results of the compared algorithms were obtained from PlatEMO [34], while MOEA/D-DOS was run on jMetal. Finally, we used UF [26], DTLZ [35], and WFG [36] as the test instances. All of the experimental results are shown on

Conclusion

In this paper, we have proposed the DE-SBX/either-or genetic operator and, based on it, proposed DOS to dynamically optimize the Duty Ratio parameter P. Finally, we proposed the MOEA/D-DOS algorithm. This paper has focused on the co-evolution ability of different genetic operators: DE-SBX/either-or uses the co-evolution of DE/rand/1 and SBX to generate offspring and achieves good performance. We also proposed DOS to improve the co-evolution effect of DE-SBX/either-or. The experimental

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The work is partially supported by the National Natural Science Foundation of China (Nos. U1836216, 61702310, 61772322), the major fundamental research project of Shandong, China (No. ZR2019ZD03), and the Taishan Scholar Project of Shandong, China.

References (36)

  • Li, J., et al., A hybrid multi-objective artificial bee colony algorithm for flexible task scheduling problems in cloud computing system, Cluster Comput. (2019)
  • Sun, J., et al., Graph neural network encoding for community detection in attribute networks, IEEE Trans. Cybern. (2021)
  • Miettinen, K., Nonlinear Multiobjective Optimization, Vol. 12 (2012)
  • Deb, K., et al., A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. (2002)
  • Zhang, Q., et al., MOEA/D: A multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. (2007)
  • Zhang, Q., et al., The performance of a new version of MOEA/D on CEC09 unconstrained MOP test instances
  • Li, K., et al., Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. (2013)
  • Rostami, S., et al., Covariance matrix adaptation pareto archived evolution strategy with hypervolume-sorted adaptive grid algorithm, Integr. Comput.-Aided Eng. (2016)