A fast differential evolution algorithm using k-Nearest Neighbour predictor

https://doi.org/10.1016/j.eswa.2010.09.092

Abstract

Genetic algorithms (GAs), particle swarm optimisation (PSO) and differential evolution (DE) have proven to be successful in engineering optimisation problems. The main limitation of these tools is their expensive computational requirements. The optimisation process usually needs to run the numerical model and evaluate the objective function thousands of times before converging to an acceptable solution. However, in real-world applications, there is simply not enough time and resources to perform such a huge number of model runs. In this study, a computational framework, known as DE-kNN, is presented for solving computationally expensive optimisation problems. The concept of DE-kNN will be demonstrated via a novel approximate model using a k-Nearest Neighbour (kNN) predictor. We describe the performance of DE and DE-kNN when applied to the optimisation of a test function. The simulation results suggest that the proposed optimisation framework is able to achieve good solutions as well as provide considerable savings in function calls compared to the DE algorithm.

Research highlights

► A computational framework, known as DE-kNN (Differential Evolution – k-Nearest Neighbour), is presented for solving computationally expensive optimisation problems.
► The proposed method investigates the use of a kNN predictor in conjunction with DE in order to accelerate the search process.
► The dynamic learning approach can guarantee that DE converges towards the global optimum with fewer actual evaluations, without compromising the good search capabilities of DE.
► The simulation results suggest that the proposed optimisation framework is able to achieve good solutions as well as provide considerable savings in function calls compared to the DE algorithm.

Introduction

Population-based search strategies such as genetic algorithms (Holland, 1975), particle swarm optimisation (Kennedy & Eberhart, 1995) and differential evolution (Storn & Price, 1995) have found wide application in various fields of engineering (Liu, 2009). One of the biggest drawbacks of using these population-based optimisation methods for engineering applications is that they require a large number of model evaluations. For example, Madsen (2000) reported that 10,000 model runs were needed in order to calibrate the MIKE 11/NAM rainfall–runoff model with nine parameters. For many real-world engineering applications, the number of simulation model runs that can be afforded is very limited. This is especially true when each evaluation of the quality of a solution is very time consuming, generally constrained by the time needed to run the simulation model. For example, each simulation of a coarse 2D hydrodynamic model may take up to 1 min to run on a high-performance computer. As the spatial and temporal resolution increases, the simulation time increases; one can expect it to grow to several minutes or even hours for full 3D hydrodynamic models. Reducing the number of actual evaluations necessary to reach an acceptable solution is thus of major importance. Although the use of parallel computing is one remedy for reducing the computing time required for complex problems, an alternative approach using meta-models has received much attention recently (Jin, 2005, Khu et al., 2004, Liu et al., 2004, Liu and Khu, 2007, Yan and Minsker, 2003). Meta-models can be viewed as approximations of the original models and may be used in place of the original models to reduce the computational time. There have since been numerous developments in the field of meta-models, the most prominent being surrogates of time-consuming simulation models. They have been applied to design evaluation and optimisation in many engineering applications.

Jin (2005) presented a complete survey of the research on the use of fitness approximation in evolutionary computation. Jin (2005) classified currently used meta-models into four categories, namely polynomial models, kriging models, neural networks and support vector machines. Main issues such as approximation levels, approximate model management schemes and model construction techniques were reviewed. He stated that it is difficult to construct a meta-model that is globally correct, due to the high dimensionality, ill distribution and limited number of training samples. It has also been suggested that an appropriate procedure must be set up to combine meta-models with the original numerical model, a technique known as evolution control or model management. Jin (2005) referred to an individual that is evaluated using the original fitness function as a controlled individual, and a generation in which all individuals are evaluated using the original fitness function as a controlled generation. Jin (2005) defined two evolution control approaches: in individual-based evolution control, a fraction of each population is evaluated by the true fitness function and the remainder of the population by the meta-model, while in generation-based evolution control the entire population is either evaluated by the real function or by the meta-model. In individual-based evolution control, either a random strategy or a best strategy can be used to select the individuals to be controlled (Jin, 2005). In the best strategy, the best individuals (based on the ranking produced by the meta-model) in the current generation are re-evaluated using the original function, while in the random strategy the individuals to be controlled are selected randomly. It has been shown that the best strategy can reduce the computational cost further, and that individual-based evolution control can be carried out in only a selected number of generations (Jin, 2005). 
For both control methods, there is disagreement about how many individuals of a population need to be controlled. Khu et al. (2004) investigated the idea of using fewer model evaluations to reach the optimal solution by estimating fitness with a radial basis function (RBF) neural network, and claimed a reduction of about 66.5% in exact evaluations for a rainfall–runoff model calibration problem. In the work by Jin, Olhofer, and Sendhoff (2000), the convergence properties of an evolution strategy (ES) with multilayer perceptron (MLP) neural-network-based fitness evaluation were investigated. Yan and Minsker (2003) proposed a dynamic meta-modelling approach, in which artificial neural networks (ANNs) and support vector machines (SVMs) were embedded into a genetic algorithm optimisation framework to replace time-consuming flow and contaminant transport models.
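The individual-based evolution control with the "best" strategy described above can be sketched as follows. This is a minimal illustration of the general idea, not code from any of the cited works; the function name and the minimisation assumption are our own:

```python
def control_best(population, surrogate_fitness, true_fitness, n_controlled=1):
    """Individual-based evolution control, 'best' strategy (minimisation):
    rank the population by the meta-model's predictions, then re-evaluate
    only the top n_controlled individuals with the expensive true fitness.
    Returns a dict mapping population index -> fitness value used."""
    # Rank individuals by surrogate prediction (lower is better).
    ranked = sorted(range(len(population)), key=lambda i: surrogate_fitness[i])
    fitness = dict(enumerate(surrogate_fitness))
    for i in ranked[:n_controlled]:
        fitness[i] = true_fitness(population[i])  # expensive exact call
    return fitness
```

Swapping `ranked[:n_controlled]` for a random sample of indices gives the "random" strategy.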

Therefore, a time-efficient approximate model can be particularly beneficial when function evaluations are expensive. The method proposed in this paper investigates the use of a kNN predictor in conjunction with DE in order to accelerate the search process. We have named this new technique DE-kNN. The proposed framework uses a computationally cheap predictor, constructed through dynamic learning, to reduce the number of exact, computationally expensive evaluation calls during the DE search. The algorithm reported here can serve as a benchmark algorithm for these types of expensive optimisation problems. This approach can substantially reduce the number of function evaluations on computationally expensive problems without compromising the good search capabilities of DE.
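The broad shape of such a surrogate-assisted DE loop can be sketched as below. This is only an illustration of the general idea under our own assumptions (screening rule, archive handling, stopping condition); it does not reproduce the paper's exact DE-kNN update and control scheme:

```python
import random

def surrogate_assisted_de(f, bounds, predict, NP=20, F=0.5, CR=0.9, budget=200):
    """Generic surrogate-assisted DE sketch (minimisation). An archive of
    exact evaluations feeds a cheap predictor ('dynamic learning'); trial
    vectors are screened by the predictor and only promising ones spend a
    call from the exact-evaluation budget."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]                       # initial exact evaluations
    archive = [(tuple(x), fx) for x, fx in zip(pop, fit)]
    calls = NP
    for _ in range(1000):                           # generation cap as a safeguard
        if calls >= budget:
            break
        for i in range(NP):
            # DE/rand/1 mutation with binomial crossover (simplified).
            r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
            trial = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                     if random.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            # Screen with the cheap predictor; spend an exact call only if promising.
            if calls < budget and predict(archive, trial) < fit[i]:
                ft = f(trial)
                calls += 1
                archive.append((tuple(trial), ft))  # grow the training archive
                if ft < fit[i]:
                    pop[i], fit[i] = trial, ft
    return min(zip(fit, pop))
```

Here `predict(archive, point)` is any cheap approximator trained on the archive, e.g. a kNN regressor.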

The DE and DE-kNN methods in this paper are used to investigate the optimisation of a test function. In Section 2, the differential evolution algorithm is briefly described. In Section 3, the proposed optimisation algorithm using a k-Nearest Neighbour predictor for solving expensive optimisation problems is presented. In Section 4, a test example is presented that illustrates the principles and implications of the proposed optimisation algorithm. Finally, conclusions are given in Section 5.

Section snippets

Differential evolution algorithm

Differential evolution (DE) is a population-based direct-search algorithm for global optimisation (Storn & Price, 1995). The standard DE works as follows: for each vector x_i,G, i = 1, 2, …, NP, a trial vector v_i,G+1 is generated according to

v_i,G+1 = x_r1,G + F · (x_r2,G − x_r3,G),

with r1, r2, r3 ∈ {1, 2, …, NP} integer and mutually different, r1 ≠ r2 ≠ r3 ≠ i, and F > 0. F is a real and constant factor which controls the amplification of the differential variation (x_r2,G − x_r3,G).
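The mutation step above can be written in Python as follows (a minimal illustration of the standard DE/rand/1 rule, not the authors' implementation):

```python
import random

def de_rand1_mutation(pop, i, F=0.5):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
    mutually distinct indices, all different from the target index i."""
    NP = len(pop)
    r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[0]))]
```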

In order to increase the diversity of the

k-Nearest Neighbour algorithm

There are many nonlinear predictors, such as artificial neural networks (ANNs), support vector machines (SVMs) and kNN (Bishop, 1995, Vapnik, 1998). An ANN has many neurons, and each neuron accepts input and gives output according to its activation function. An ANN can learn the mapping relationship between the inputs and the outputs sampled from a training set using a supervised learning algorithm. The trained ANN is then used to make a prediction for test data. SVM uses a Lagrangian formulation to
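A minimal kNN regression predictor of the kind discussed here can be sketched as follows (an illustration under our own conventions, with the archive stored as (point, fitness) pairs; not the paper's code):

```python
def knn_predict(archive, query, k=3):
    """Predict the fitness of `query` as the mean fitness of its k nearest
    neighbours in the archive, using squared Euclidean distance."""
    neighbours = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), fx) for x, fx in archive
    )
    return sum(fx for _, fx in neighbours[:k]) / k
```

Distance-weighted averaging is a common refinement, but the plain mean keeps the sketch short.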

Bukin test function

The Bukin function is almost fractal (with fine seesaw edges) in the neighbourhood of its minimum point (see Fig. 2). Due to this property, it is extremely difficult to optimise by any method of global (or local) optimisation (Sudhanshu, 2006). In the search domain x1 ∈ [−15, −5], x2 ∈ [−3, 3], the function is defined as follows:

f(x) = 100 · √|x2 − 0.01 · x1²| + 0.01 · |x1 + 10|,  f_min(−10, 1) = 0.
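The test function can be implemented directly (using the commonly cited Bukin N.6 form with the square-root term; an illustration, not the paper's code):

```python
import math

def bukin_n6(x1, x2):
    """Bukin N.6 test function; global minimum f(-10, 1) = 0.
    Search domain: x1 in [-15, -5], x2 in [-3, 3]."""
    return 100.0 * math.sqrt(abs(x2 - 0.01 * x1 ** 2)) + 0.01 * abs(x1 + 10.0)
```

The narrow curved valley along x2 = 0.01·x1² is what makes the function so hard for both global and local optimisers.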

Experimental setup and results

The relevant experimental parameters used by DE and DE-kNN for the test function are listed in Table 1 and Table 2. For each

Conclusions

In real-world optimisation problems, the function evaluations usually require a large amount of computation time. In this paper, we presented a hybrid optimisation framework (DE-kNN) that combines DE with an approximation technique, the kNN predictor, which can produce fairly accurate global approximations of the actual parameter space and thus provide function evaluations efficiently. The two optimisation algorithms were used to optimise the test function. It is clear

Acknowledgement

The author would like to thank ABP Marine Research Ltd., UK for funding the work.

