A surrogate-assisted Jaya algorithm based on optimal directional guidance and historical learning mechanism
Introduction
Real-valued optimization problems widely exist in engineering design and scientific research (Liang et al., 2020, Zhang et al., 2021a). In practice, such problems are often nonlinear, non-differentiable, discontinuous, and multimodal, so traditional optimization algorithms, such as gradient descent, are not applicable (Zhang et al., 2021b). Unlike traditional optimization algorithms, evolutionary algorithms offer global search ability, no restriction on the form of the objective function, the ability to optimize black-box problems, and easy parallelization. These advantages have made evolutionary algorithms widely used and one of the most popular optimization technologies.
A number of classical evolutionary algorithms have been proposed and studied. Differential evolution (DE) (Bilal Pant et al., 2020) combines mutation, crossover, and selection operations to improve the global search capability of the algorithm. In particle swarm optimization (PSO) (Tang et al., 2015), the velocity and position of each individual control the step size and direction of evolution toward the best individual. The trade-off between exploitation and exploration in the biogeography-based optimization (BBO) algorithm (Bhattacharya and Chattopadhyay, 2010, Deng et al., 2019, Simon, 2008) is adjusted by biological migration and mutation operations. In the whale optimization algorithm (WOA) (Long et al., 2020), individuals learn information from their nearest neighbors to improve optimization ability. In water wave optimization (WWO) (Zhao et al., 2019), desirable information from the optimal solution is spread by propagation, refraction, and breaking operations during the search process. In the brainstorm optimization (BSO) algorithm (Zhao et al., 2021a), the k-means method is used to cluster the population, and the optimal individual of one cluster is randomly replaced by the optimal solution of another to ensure the entire search space is searched. A hierarchical knowledge-guided backtracking search algorithm (HKBSA) (Zhao et al., 2021b) combines a multi-population strategy with multi-strategy mutation to discover the optimal solution. However, evolutionary algorithms still face challenges in practical applications, including computationally expensive optimization problems.
Computationally expensive optimization problems are often characterized by a high degree of non-linearity and the absence of an explicit objective function. Because selection is based on individual fitness values, evolutionary algorithms need no gradient information from the problem and can efficiently solve optimization problems that are black-box, non-linear, non-differentiable, or discontinuous. For this reason, evolutionary algorithms are popular in application areas that involve computationally expensive problems. In recent years, surrogate-assisted evolutionary algorithms (SAEAs) have been widely utilized for engineering optimization problems with computationally expensive simulations (Cai et al., 2020, Wang et al., 2020, Zhao et al., 2020b). SAEAs overcome the deficiencies of traditional mathematical methods in solving engineering optimization problems that lack a precise mathematical model. Compared with traditional evolutionary algorithms, SAEAs mitigate the high simulation cost and long runtimes of solving such problems. The main idea of SAEAs is to use computationally inexpensive surrogate models to evaluate individuals in place of real fitness evaluations, allowing evolutionary algorithms to find better solutions with limited computational resources.
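As a rough illustration of the surrogate idea (not the ensemble model proposed in this paper), a simple radial-basis-function interpolant can stand in for an expensive simulation; the function names, kernel width, and sampling below are assumptions made for the sketch:

```python
import numpy as np

def rbf_surrogate(X_train, y_train, eps=2.0):
    """Fit a Gaussian radial-basis-function interpolant and return a cheap
    predictor. A generic sketch of the surrogate idea, not the ensemble
    model used in SDH-Jaya."""
    def kernel(A, B):
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return np.exp(-(eps * d) ** 2)
    K = kernel(X_train, X_train) + 1e-8 * np.eye(len(X_train))  # ridge for stability
    w = np.linalg.solve(K, y_train)            # interpolation weights
    return lambda X: kernel(X, X_train) @ w    # surrogate predictor

rng = np.random.default_rng(0)
expensive = lambda X: np.sum(X**2, axis=1)     # stand-in for a costly simulation
X = rng.uniform(-5, 5, (30, 2))
model = rbf_surrogate(X, expensive(X))         # built from a few real evaluations
candidates = rng.uniform(-5, 5, (200, 2))
best = candidates[np.argmin(model(candidates))]  # only `best` needs a real evaluation
```

The 200 candidates are screened by the cheap predictor, so only the most promising point is passed to the expensive simulation, which is the resource-saving pattern SAEAs rely on.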
The Jaya algorithm, an algorithm-specific parameter-less evolutionary algorithm, was proposed by Venkata Rao (2016) to address engineering optimization problems. As one of the newer swarm intelligence optimization algorithms, Jaya is easy to implement and has few specific parameters. Its operating mechanism guides each individual toward the best individual and away from the worst individual. The offspring population is produced after individual renewal; the fitness values of parent and offspring individuals are compared, and the better of the two is selected into the next generation. In this way, offspring move closer to the best individual of each generation and away from the worst. Like other evolutionary algorithms, Jaya requires many fitness evaluations when solving optimization problems. In the current literature, there is no research on how to decrease the evaluation time used by the Jaya algorithm, and this research direction is vital in practical engineering applications.
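The update-and-select mechanism described above can be sketched as a minimal classical Jaya loop; the function signature, bounds handling, and stopping rule here are illustrative choices, not the paper's implementation:

```python
import numpy as np

def jaya(f, lb, ub, pop=20, iters=100, seed=0):
    """Minimal classical Jaya (after Rao, 2016): every solution moves toward
    the current best and away from the current worst; greedy selection keeps
    the better of parent and offspring."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best, worst = X[np.argmin(fit)], X[np.argmax(fit)]
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        # Jaya update: attract to best, repel from worst
        V = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
        V = np.clip(V, lb, ub)                 # keep offspring inside the bounds
        vfit = np.apply_along_axis(f, 1, V)
        improved = vfit < fit                  # greedy parent-offspring selection
        X[improved], fit[improved] = V[improved], vfit[improved]
    return X[np.argmin(fit)], fit.min()

x, fx = jaya(lambda v: np.sum(v**2), np.full(2, -5.0), np.full(2, 5.0))
```

Every generation costs `pop` fitness evaluations, which is exactly the expense a surrogate-assisted variant aims to reduce.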
In this study, a surrogate model combining a polynomial model and a radial basis function model is investigated, and a new variant of the Jaya algorithm (SDH-Jaya) is proposed. A surrogate-assisted model, constructed from dispersed points, uncertain points, and the optimal point, is introduced to save computational resources in SDH-Jaya. In addition, two co-evolutionary mechanisms are proposed to improve the accuracy of the surrogate model and accelerate the evolution of the population. An optimal directional guidance strategy adaptively adjusts the search direction and step size, and a historical learning mechanism balances the exploitation and exploration abilities of SDH-Jaya. The main contributions of this study are summarized as follows.
- Two co-evolutionary mechanisms are proposed to enhance the accuracy of SDH-Jaya. In the assisted co-evolutionary mechanism, the evolution of the population is assisted by the surrogate model, and an optimal individual is returned to provide a new search area. In the self-learning co-evolutionary mechanism, reward and punishment rules are designed based on the calculation speed, fitness evaluations, and the performance of the Jaya algorithm.
- A new evolutionary strategy based on optimal directional guidance and historical learning is provided to improve the performance of the classical Jaya algorithm. Difference vectors between the best and worst solutions guide the mutation of candidates, and a historical population stored in an archive is used randomly to improve the diversity of the population.
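To make the two ideas above concrete, here is a speculative sketch; the variable names, the probability `p_hist`, and the exact way the archive is combined with the difference vector are all assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def guided_mutation(X, fit, archive, p_hist=0.3):
    """Hypothetical illustration: a best-worst difference vector steers the
    mutation (optimal directional guidance), while archived historical
    solutions occasionally replace the pull to add diversity."""
    best, worst = X[np.argmin(fit)], X[np.argmax(fit)]
    r = rng.random(X.shape)
    if archive and rng.random() < p_hist:       # historical learning branch
        h = archive[rng.integers(len(archive))]
        return X + r * (h - X)                  # pull toward an archived solution
    return X + r * (best - worst)               # follow the best-worst direction

X = rng.uniform(-5, 5, (10, 3))
fit = np.sum(X**2, axis=1)
archive = [rng.uniform(-5, 5, 3) for _ in range(5)]
V = guided_mutation(X, fit, archive)
```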
The remainder of this work is organized as follows. Literature on the Jaya algorithm and surrogate models is reviewed in Section 2. The proposed SDH-Jaya is introduced in Section 3. Experimental results are analyzed and discussed in Sections 4 and 5. Continuous engineering optimization problems and scheduling problems are used to test the performance of SDH-Jaya in Section 6. Finally, conclusions are summarized and future work is suggested in Section 7.
Jaya algorithm
The Jaya algorithm is an algorithm-specific parameter-less evolutionary algorithm: apart from the population size and termination conditions, no algorithm-specific parameters need to be set in advance. The main idea of the Jaya algorithm is that all individuals keep moving closer to the optimal individual and away from the worst individual in each generation. Research on the Jaya algorithm has concentrated on two main areas, namely improvement of the algorithm and application of the algorithm.
Jaya algorithm
Assume that $f(x)$ is the function to be optimized. Let $i$ denote the iteration ($i = 1, 2, \ldots, G$), $N$ the population size ($k = 1, 2, \ldots, N$), and $D$ the number of design variables ($j = 1, 2, \ldots, D$). First, $N$ initial solutions are generated randomly in $[L_j, U_j]$, where $L_j$ and $U_j$ are the lower bound and upper bound of the $j$th design variable, respectively. Then Eq. (1) is utilized to generate offspring individuals from parent individuals:
$$X'_{j,k,i} = X_{j,k,i} + r_{1,j,i}\,\bigl(X_{j,\mathrm{best},i} - |X_{j,k,i}|\bigr) - r_{2,j,i}\,\bigl(X_{j,\mathrm{worst},i} - |X_{j,k,i}|\bigr) \tag{1}$$
In Eq. (1), the term $r_{1,j,i}(X_{j,\mathrm{best},i} - |X_{j,k,i}|)$ indicates the tendency of the solution to move closer to the best solution, and the term $-r_{2,j,i}(X_{j,\mathrm{worst},i} - |X_{j,k,i}|)$ indicates the tendency to move away from the worst solution, where $r_{1,j,i}$ and $r_{2,j,i}$ are random numbers in $[0, 1]$.
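A quick numeric walk-through of the Jaya update rule, with made-up values for a single design variable:

```python
# One Jaya update of a single design variable (illustrative numbers only)
x, x_best, x_worst = 2.0, 0.5, 4.0
r1, r2 = 0.6, 0.3   # random numbers drawn from [0, 1]
x_new = x + r1 * (x_best - abs(x)) - r2 * (x_worst - abs(x))
# 2.0 + 0.6*(0.5 - 2.0) - 0.3*(4.0 - 2.0) = 2.0 - 0.9 - 0.6 = 0.5
```

The new value is pulled toward `x_best` and pushed away from `x_worst`; greedy selection then keeps `x_new` only if it improves the fitness.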
The numerical results of the DH-Jaya and SDH-Jaya
The numerical results of the DH-Jaya and SDH-Jaya are analyzed in this section on the CEC2017 benchmark problems (Wu et al., 2016). The CEC2017 benchmark suite consists of four categories: F1–F3 are unimodal functions, F4–F10 are simple multimodal functions, F11–F20 are hybrid functions, and F21–F30 are composition functions. F2 has been excluded because it shows unstable behavior, especially for higher dimensions, and significant performance variations for the identical algorithm.
CEC 2017 benchmark test suite experiment
In this section, the basic Jaya algorithm (Venkata Rao, 2016), QO-Jaya (Warid et al., 2018), SAMP-Jaya (Venkata Rao and Saroj, 2017), and state-of-the-art algorithms such as invasive weed optimization (IWO) (Mehrabian and Lucas, 2006), proactive particles in swarm optimization (PPSO) (Tangherloni et al., 2017), teaching-learning based optimization with focused learning (TLBO-FL) (Kommadath et al., 2017), dynamic Yin–Yang pair optimization (DYYPO) (Maharana et al., 2017), orthogonal
SDH-Jaya and DH-Jaya for engineering problems
In this section, two categories of engineering optimization problems, continuous and discrete, are utilized to test the performance of SDH-Jaya. In addition, the performance of SDH-Jaya and DH-Jaya is compared with that of classical methods to analyze their advantages in addressing engineering optimization problems. For each engineering problem, each algorithm is run independently 10 times.
Conclusions and future work
The surrogate-assisted Jaya algorithm based on optimal directional guidance and a historical learning mechanism (SDH-Jaya) is introduced in this paper to improve the performance of the classical Jaya algorithm on optimization problems. Optimal directional guidance, a historical learning mechanism, and an ensemble surrogate model are introduced. In addition, an assisted co-evolutionary mechanism and a self-learning co-evolutionary mechanism are proposed. The experimental results show
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This work was financially supported by the National Natural Science Foundation of China under grant 62063021. It was also supported by the Key Talent Project of Gansu Province (ZZ2021G50700016), the Key Research Programs of Science and Technology Commission Foundation of Gansu Province (21YF5WA086), the Lanzhou Science Bureau project (2018-rc-98), and the Project of Gansu Natural Science Foundation (21JR7RA204). All authors approved the version of the manuscript to be published.
References (59)
- et al., Radial basis functions with a priori bias as surrogate models: A comparative study, Eng. Appl. Artif. Intell. (2018)
- et al., Surrogate-guided differential evolution algorithm for high dimensional expensive problems, Swarm Evol. Comput. (2019)
- et al., Semi-supervised support vector regression based on data similarity and its application to rock-mechanics parameters estimation, Eng. Appl. Artif. Intell. (2021)
- Surrogate-assisted evolutionary computation: Recent advances and future challenges, Swarm Evol. Comput. (2011)
- et al., Minimax and maximin distance designs, J. Statist. Plann. Inference (1990)
- et al., Refraction-learning-based whale optimization algorithm for high-dimensional problems and parameter estimation of PV model, Eng. Appl. Artif. Intell. (2020)
- et al., A novel numerical optimization algorithm inspired from weed colonization, Ecol. Inform. (2006)
- et al., Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm, Energy Convers. Manag. (2017)
- et al., Synthesis of linear antenna arrays using Jaya, self-adaptive Jaya and chaotic Jaya algorithms, AEU-Int. J. Electron. Commun. (2018)
- et al., A travelling salesman approach to solve the F/no-idle/Cmax problem, Eur. J. Oper. Res. (2005)