
An efficient differential evolution using speeded-up k-nearest neighbor estimator


Abstract

In evolutionary algorithms, a central issue is how to reduce the number of fitness evaluations required to reach optimal solutions. A large number of evaluations is generally needed, which increases computational time, and each evaluation may itself be expensive. Differential evolution (DE), widely used in many applications for its simplicity and good performance, cannot escape this problem either. Fitness approximation models have therefore been proposed that replace the real fitness function during evaluation; to perform well, such a model must estimate fitness accurately while remaining compact. In this paper we propose an efficient differential evolution using a fitness estimator. We choose the k-nearest neighbor (kNN) estimator because it needs no training period and no complex computation. However, accumulating too many training samples can make the estimator prohibitively expensive to query. Accordingly, two schemes, one targeting accuracy and one targeting efficiency, are proposed to improve the estimator. The proposed algorithm is tested on various benchmark functions and shown to find good optimal solutions with fewer fitness evaluations and a more compact archive of training samples than DE and DE-kNN.
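
To make the general idea concrete, below is a minimal sketch of a DE/rand/1/bin loop that screens trial vectors with a plain kNN regressor built over an archive of truly evaluated points, spending a real evaluation only on trials the estimator deems promising. It does not reproduce the paper's two speed-up schemes; every function name, parameter, and threshold is illustrative, not the authors' implementation.

```python
import numpy as np

def knn_estimate(x, arch_X, arch_f, k=5):
    """Estimate fitness of x as the mean fitness of its k nearest
    archived points; kNN needs no training phase, which is the
    property the paper exploits."""
    dist = np.linalg.norm(arch_X - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return arch_f[nearest].mean()

def de_knn(f, bounds, pop_size=30, F=0.5, CR=0.9,
           max_evals=5000, max_gens=1000, k=5):
    rng = np.random.default_rng()
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    pop = lo + (hi - lo) * rng.random((pop_size, dim))
    fit = np.array([f(x) for x in pop])
    arch_X, arch_f = pop.copy(), fit.copy()   # archive of evaluated samples
    evals = pop_size
    for _ in range(max_gens):
        if evals >= max_evals:
            break
        for i in range(pop_size):
            # DE/rand/1 mutation with three distinct partners.
            r = rng.choice([j for j in range(pop_size) if j != i],
                           3, replace=False)
            mutant = np.clip(pop[r[0]] + F * (pop[r[1]] - pop[r[2]]), lo, hi)
            # Binomial crossover with one guaranteed mutant gene.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Screen with the cheap estimator; evaluate for real
            # only when the trial looks better than the target.
            if knn_estimate(trial, arch_X, arch_f, k) < fit[i]:
                f_trial = f(trial)
                evals += 1
                arch_X = np.vstack([arch_X, trial])
                arch_f = np.append(arch_f, f_trial)
                if f_trial < fit[i]:
                    pop[i], fit[i] = trial, f_trial
                if evals >= max_evals:
                    break
    best = fit.argmin()
    return pop[best], fit[best]

# Example: minimize a 10-dimensional sphere function.
# best_x, best_f = de_knn(lambda x: float(np.sum(x**2)), [(-100, 100)] * 10)
```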


References

  • Adra SF, Dodd TJ, Griffin IA, Fleming PJ (2009) Convergence acceleration operator for multiobjective optimization. IEEE Trans Evol Comput 13(4):825–847

  • Alba E, Tomassini M (2002) Parallelism and evolutionary algorithms. IEEE Trans Evol Comput 6(5):443–462

  • Alcalá-Fdez J, Sánchez L, García S, del Jesus MJ, Ventura S, Garrell JM, Otero J, Romero C, Bacardit J, Rivas VM, Fernández JC, Herrera F (2009) KEEL: a software tool to assess evolutionary algorithms for data mining problems. Soft Comput 13(3):307–318

  • Cabido R, Montemayor A, Pantrigo J (2012) High performance memetic algorithm particle filter for multiple object tracking on modern GPUs. Soft Comput 16(2):217–230

  • Chan TM, Leung KS, Lee KH (2012) Memetic algorithms for de novo motif discovery. IEEE Trans Evol Comput 16(5):730–748

  • Cruz-Ramírez M, Hervás-Martínez C, Gutiérrez P, Pérez-Ortiz M, Briceño J, de la Mata M (2013) Memetic pareto differential evolutionary neural network used to solve an unbalanced liver transplantation problem. Soft Comput 17(2):275–284

  • Hansen N, Ostermeier A (2001) Completely derandomized self-adaptation in evolution strategies. Evol Comput 9(2):159–195

  • Hu XM, Zhang J, Yu Y, Chung HH, Li YL, Shi YH, Luo XN (2010) Hybrid genetic algorithm using a forward encoding scheme for lifetime maximization of wireless sensor networks. IEEE Trans Evol Comput 14(5):766–781

  • Iacca G, Neri F, Mininno E, Ong YS, Lim MH (2012) Ockham’s razor in memetic computing: three stage optimal memetic exploration. Inf Sci 188:17–43

  • Jin S (2007) An efficient evolutionary optimization with fitness approximation using neural networks. Master’s thesis, Korea Advanced Institute of Science and Technology

  • Jin Y (2005) A comprehensive survey of fitness approximation in evolutionary computation. Soft Comput 9(1):3–12

  • Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, pp 1942–1948

  • Kumar S (2004) Neural networks: a classroom approach. Tata McGraw-Hill, New Delhi

  • Kuncheva LI (2000) Fuzzy classifier design. Physica Verlag, New York

  • Larrañaga P (2002) Estimation of distribution algorithms: a new tool for evolutionary computation, chap. A review on estimation of distribution algorithm. Kluwer Academic Publishers, Boston, pp 57–100

  • Liu Y, Sun F (2011) A fast differential evolution algorithm using k-nearest neighbour predictor. Expert Syst Appl 38(4):4254–4258

  • Llorà X, Sastry K, Goldberg DE, Gupta A, Lakshmi L (2005) Combating user fatigue in iGAs: partial ordering, support vector machines, and synthetic fitness. In: Proceedings of the 2005 conference on Genetic and evolutionary computation, pp 1363–1370

  • Masters T, Land W (1997) New training algorithm for the general regression neural network. In: Proceedings of the IEEE international conference on systems, man, and cybernetics, vol 3. Springer, pp 1990–1994

  • Meuth R, Lim MH, Ong YS, Wunsch II DC (2009) A proposition on memes and meta-memes in computing for higher-order learning. Memetic Comput 1(2):85–100

  • Michalewicz Z (1996) Genetic algorithms + data structures = evolution programs. Springer, London

  • Moscato P (1989) On evolution, search, optimization, genetic algorithms and martial arts—towards memetic algorithms. Caltech concurrent computation program, C3P Report, 826, Pasadena

  • Navot A, Shpigelman L, Tishby N, Vaadia E (2006) Nearest neighbor based feature selection for regression and its application to neural activity. Advances in Neural Information Processing Systems, NIPS, pp 995–1002

  • Neri F, Cotta C (2012) Memetic algorithms and memetic computing optimization: a literature review. Swarm Evol Comput 2:1–14

  • Neri F, Cotta C, Moscato P (2012) Handbook of memetic algorithms. Springer, Berlin

  • Neri F, Tirronen V, Karkkainen T, Rossi T (2007) Fitness diversity based adaptation in multimeme algorithms: a comparative study. In: Proceedings of the IEEE congress on evolutionary computation, pp 2374–2381

  • Ong Y, Keane A (2004) Meta-Lamarckian learning in memetic algorithms. IEEE Trans Evol Comput 8(2):99–110

  • Ong YS, Lim M, Chen X (2010) Memetic computation—past, present & future. IEEE Comput Intell Mag 5(2):24–31

  • Price KV, Storn RM, Lampinen JA (2005) Differential evolution: a practical approach to global optimization. Springer, Berlin

  • Queipo NV, Haftka RT, Shyy W, Goel T, Vaidyanathan R, Tucker PK (2005) Surrogate-based analysis and optimization. Progress Aerospace Sci 41(1):1–28

  • Shi L, Rasheed K (2010) Computational intelligence in expensive optimization problems, chap. A survey of fitness approximation methods applied in evolutionary algorithms. Springer, Berlin, pp 3–28

  • Smith J (2007) Coevolving memetic algorithms: a review and progress report. IEEE Trans Systems Man Cybern Part B Cybern 37(1):6–17

  • Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11:341–359

  • Tenne Y, Armfield SW (2009) A framework for memetic optimization using variable global and local surrogate models. Soft Comput 13(8-9):781–793

  • Whitty S (2005) A memetic paradigm of project management. Int J Project Manag 23(8):575–583

  • Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102

  • Zhang Q, Liu W, Tsang E, Virginas B (2010) Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Trans Evol Comput 14(3):456–474

  • Zhou Z, Ong YS, Lim MH, Lee BS (2007) Memetic algorithm using multi-surrogates for computationally expensive optimization problems. Soft Comput 11(10):957–971

Author information

Corresponding author

Correspondence to So-Youn Park.

Additional information

Communicated by G. Acampora.

Appendices

Appendix A: Benchmark functions

f1: Sphere model

$$ f_1(x) = \sum_{i=1}^N x_i^2,\quad -100 \le x_i \le 100 $$
$$ \min(f_1) = f_1(0,\ldots,0) = 0 $$

f2: Schwefel’s problem 2.22

$$ f_2(x) = \sum_{i=1}^N |x_i| + \prod_{i=1}^N |x_i|,\quad -10 \le x_i \le 10 $$
$$ \min(f_2) = f_2(0,\ldots,0) = 0 $$

f3: Schwefel’s problem 1.2

$$ f_3(x) = \sum_{i=1}^N \left( \sum_{j=1}^i x_j \right)^2,\quad -100 \le x_i \le 100 $$
$$ \min(f_3) = f_3(0,\ldots,0) = 0 $$

f4: Schwefel’s problem 2.21

$$ f_4(x) = \max_i \{ |x_i|,\ 1 \le i \le N \},\quad -100 \le x_i \le 100 $$
$$ \min(f_4) = f_4(0,\ldots,0) = 0 $$

f5: Generalized Rosenbrock’s function

$$ f_5(x) = \sum_{i=1}^{N-1} \left[ 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right],\quad -30 \le x_i \le 30 $$
$$ \min(f_5) = f_5(1,\ldots,1) = 0 $$

f6: Step function

$$ f_6(x) = \sum_{i=1}^N \left( \lfloor x_i + 0.5 \rfloor \right)^2,\quad -100 \le x_i \le 100 $$
$$ \min(f_6) = f_6(0,\ldots,0) = 0 $$

f7: Quartic function with noise

$$ f_7(x) = \sum_{i=1}^N i x_i^4 + \mathrm{random}[0,1),\quad -1.28 \le x_i \le 1.28 $$
$$ \min(f_7) = f_7(0,\ldots,0) = 0 $$

f8: Generalized Schwefel’s problem 2.26

$$ f_8(x) = -\sum_{i=1}^N x_i \sin\left( \sqrt{|x_i|} \right),\quad -500 \le x_i \le 500 $$
$$ \min(f_8) = f_8(420.9687,\ldots,420.9687) = -418.983N $$

f9: Generalized Rastrigin’s function

$$ f_9(x) = \sum_{i=1}^N \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right],\quad -5.12 \le x_i \le 5.12 $$
$$ \min(f_9) = f_9(0,\ldots,0) = 0 $$

f10: Ackley’s function

$$ \begin{aligned} f_{10}(x) &= -20\exp\left( -0.2 \sqrt{ \frac{1}{N} \sum_{i=1}^N x_i^2 } \right)\\ &\quad - \exp\left( \frac{1}{N} \sum_{i=1}^N \cos 2\pi x_i \right) + 20 + e \end{aligned} $$
$$ -32 \le x_i \le 32,\quad \min(f_{10}) = f_{10}(0,\ldots,0) = 0 $$

f11: Generalized Griewank function

$$ f_{11}(x) = \frac{1}{4000} \sum_{i=1}^N x_i^2 - \prod_{i=1}^N \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1 $$
$$ -600 \le x_i \le 600,\quad \min(f_{11}) = f_{11}(0,\ldots,0) = 0 $$

f12, f13: Generalized penalized functions

$$ \begin{aligned} f_{12}(x) &= \frac{\pi}{N} \left\{ 10\sin^2(\pi y_1) + \sum_{i=1}^{N-1} (y_i - 1)^2 \right.\\ &\quad \left. \cdot\, [1 + 10\sin^2(\pi y_{i+1})] + (y_N - 1)^2 \right\}\\ &\quad + \sum_{i=1}^N u(x_i, 10, 100, 4) \end{aligned} $$
$$ -50 \le x_i \le 50,\quad \min(f_{12}) = f_{12}(-1,\ldots,-1) = 0 $$
$$ \begin{aligned} f_{13}(x) &= 0.1 \biggl\{ \sin^2(3\pi x_1) + \sum_{i=1}^{N-1} (x_i - 1)^2 [1 + \sin^2(3\pi x_{i+1})] \biggr.\\ &\quad \biggl. + (x_N - 1)^2 [1 + \sin^2(2\pi x_N)] \biggr\}\\ &\quad + \sum_{i=1}^N u(x_i, 5, 100, 4) \end{aligned} $$
$$ -50 \le x_i \le 50,\quad \min(f_{13}) = f_{13}(1,\ldots,1) = 0 $$

where

$$ \begin{aligned} u(x_i , a,k,m) &= \left\{\begin{array}{ll} k (x_i -a)^m , & x_i >a \\ 0, & -a \le x_i \le a \\ k (-x_i -a)^m , & x_i < -a \end{array}\right. \\ y_i &= 1+ \frac{1}{4} (x_i +1) \\ \end{aligned} $$

f14: Six-hump camel-back function

$$ f_{14}(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4 $$
$$ -5 \le x_i \le 5 $$
$$ x_{\min} = (0.08983, -0.7126),\ (-0.08983, 0.7126) $$
$$ \min(f_{14}) = -1.0316285 $$

f15: Branin function

$$ f_{15}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10\left( 1 - \frac{1}{8\pi} \right)\cos(x_1) + 10 $$
$$ -5 \le x_1 \le 10,\quad 0 \le x_2 \le 15 $$
$$ x_{\min} = (-3.142, 12.275),\ (3.142, 2.275),\ (9.425, 2.475) $$
$$ \min(f_{15}) = 0.398 $$

f16: Goldstein-Price function

$$ \begin{aligned} f_{16}(x) &= [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2\\ &\quad + 6x_1 x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2\\ &\quad \times (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)] \end{aligned} $$
$$ -2 \le x_i \le 2,\quad \min(f_{16}) = f_{16}(0, -1) = 3 $$

f17: Bukin function

$$ f_{17}(x) = 100\sqrt{|x_2 - 0.01x_1^2|} + 0.01|x_1 + 10| $$
$$ -15 \le x_1 \le -5,\quad -3 \le x_2 \le 3 $$
$$ \min(f_{17}) = f_{17}(-10, 1) = 0 $$
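
As a sanity check on the definitions above, here is a minimal NumPy sketch of a few of these benchmarks, including the penalty term u shared by f12 and f13; the function names are ours, not the paper's, and the assertions verify the listed minima.

```python
import numpy as np

def sphere(x):                      # f1
    return np.sum(x ** 2)

def rastrigin(x):                   # f9
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):                      # f10
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def u(x, a, k, m):                  # penalty term used by f12 and f13
    return np.where(x > a, k * (x - a) ** m,
           np.where(x < -a, k * (-x - a) ** m, 0.0))

def penalized_1(x):                 # f12
    n = x.size
    y = 1 + (x + 1) / 4
    return ((np.pi / n) * (10 * np.sin(np.pi * y[0]) ** 2
            + np.sum((y[:-1] - 1) ** 2
                     * (1 + 10 * np.sin(np.pi * y[1:]) ** 2))
            + (y[-1] - 1) ** 2)
            + np.sum(u(x, 10, 100, 4)))

# Each minimum matches Appendix A (tolerances absorb float round-off).
assert sphere(np.zeros(30)) == 0
assert rastrigin(np.zeros(30)) == 0
assert abs(ackley(np.zeros(30))) < 1e-12
assert abs(penalized_1(-np.ones(30))) < 1e-12
```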

Appendix B

See Tables 7, 8, 9 and 10.

Table 7 Mean and standard deviation of optimal value found by DE
Table 8 Mean and standard deviation of optimal value found by DE-EkNN, DE-kNN, DE-wkNNAll, and DE-okNNSel
Table 9 Mean and standard deviation of the number of fitness evaluations for DE-EkNN, DE-kNN, DE-wkNNAll, and DE-okNNSel
Table 10 Mean and standard deviation of the size of archive containing training samples for DE-EkNN, DE-kNN, DE-wkNNAll, and DE-okNNSel


About this article

Cite this article

Park, SY., Lee, JJ. An efficient differential evolution using speeded-up k-nearest neighbor estimator. Soft Comput 18, 35–49 (2014). https://doi.org/10.1007/s00500-013-1030-x

