Evaluating a local genetic algorithm as context-independent local search operator for metaheuristics

  • Original Paper
  • Published in: Soft Computing

Abstract

Local genetic algorithms have been designed with the aim of providing effective intensification. One of their most outstanding features is that they may help classical local search-based metaheuristics to improve their behavior. This paper experimentally investigates the role of a recent approach, the binary-coded local genetic algorithm (BLGA), as a context-independent local search operator for three local search-based metaheuristics: random multi-start local search, iterated local search, and variable neighborhood search. These general-purpose models treat the objective function as a black box, allowing the search process to be context-independent. The results show that BLGA may provide effective and efficient intensification, not only enhancing these three metaheuristics, but also suggesting successful applications in other local search-based algorithms. In addition, the empirical results reported here reveal relevant insights into the behavior of classical local search methods when they act as context-independent optimizers within these three well-known metaheuristics.

Notes

  1. http://cs.gmu.edu/eclab/projects/ecj/.

References

  • Alcalá-Fdez J, Sánchez L, García S, del Jesus MJ, Ventura S, Garrel JM, Otero J, Romero C, Bacardit J, Rivas VM, Fernández JC, Herrera F (2009) KEEL: a software tool to assess evolutionary algorithms for data mining problems. Soft Comput 13(3):307–318

  • Auger A, Hansen N (2005) Performance evaluation of an advanced local search evolutionary algorithm. In: Corne D, Michalewicz Z, McKay B, Eiben G, Fogel D, Fonseca C, Greenwood G, Raidl G, Tan KC, Zalzala A (eds) Proceedings of the IEEE international conference on evolutionary computation, vol 2. IEEE, New York, pp 1777–1784

  • Beasley JE (1990) OR-library: distributing test problems by electronic mail. J Oper Res Soc 41(11):1069–1072. http://people.brunel.ac.uk/mastjjb/jeb/info.html

  • Beasley JE (1998) Heuristic algorithms for the unconstrained binary quadratic programming problem. Technical report, The Management School, Imperial College

  • Blum C (2002) ACO applied to group shop scheduling: a case study on intensification and diversification. In: Dorigo M, Di Caro G, Sampels M (eds) ANTS. LNCS, vol 2463. Springer, Heidelberg, pp 14–27

  • Blum C, Roli A (2003) Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput Surv 35(3):268–308

  • Boender CGE, Rinnooy-Kan AHG, Stougie L, Timmer GT (1982) A stochastic method for global optimization. Math Program 22:125–140

  • Boros E, Hammer PL, Tavares G (2007) Local search heuristics for quadratic unconstrained binary optimization (QUBO). J Heuristics 13(2):99–132

  • Brimberg J, Mladenović N, Urošević D (2008) Local and variable neighborhood search for the k-cardinality subgraph problem. J Heuristics 14(5):501–517

  • Campos V, Laguna M, Martí R (2005) Context-independent scatter and tabu search for permutation problems. INFORMS J Comput 17(1):111–122

  • Chelouah R, Siarry P (2003) Genetic and Nelder-Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions. Eur J Oper Res 148(2):335–348

  • Davis L (1991) Bit-climbing, representational bias, and test suite design. In: Belew R, Booker LB (eds) Proceedings of the international conference on genetic algorithms. Morgan Kaufmann, Menlo Park, pp 18–23

  • De Jong K, Potter MA, Spears WM (1997) Using problem generators to explore the effects of epistasis. In: Bäck T (ed) Proceedings of the international conference on genetic algorithms. Morgan Kaufmann, Menlo Park, pp 338–345

  • Dorigo M, Stützle T (2004) Ant colony optimization. MIT, Cambridge

  • Dunham B, Fridshal D, Fridshal R, North JH (1963) Design by natural selection. Synthese 15(1):254–259

  • Fernandes C, Rosa A (2001) A study on non-random mating and varying population size in genetic algorithms using a royal road function. In: Proceedings of the congress on evolutionary computation. IEEE, New York, pp 60–66

  • Fernandes C, Rosa AC (2008) Self-adjusting the intensity of assortative mating in genetic algorithms. Soft Comput 12(10):955–979

  • Fournier NG (2007) Modelling the dynamics of stochastic local search on k-sat. J Heuristics 13(6):587–639

  • Garcia S, Molina D, Lozano M, Herrera F (2008) A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 special session on real parameter optimization. J Heuristics. doi:10.1007/s10732-008-9080-4

  • Garcia S, Fernández A, Luengo J, Herrera F (2009) A study of statistical techniques and performance measures for genetics-based machine learning: accuracy and interpretability. Soft Comput 13(10):959–977

  • García-Martínez C, Lozano M (2008) Local search based on genetic algorithms. In: Siarry P, Michalewicz Z (eds) Advances in metaheuristics for hard optimization. Natural computing. Springer, Heidelberg, pp 199–221

  • García-Martínez C, Lozano M, Molina D (2006) A local genetic algorithm for binary-coded problems. In: Runarsson TP, Beyer H-G, Burke E, Merelo-Guervós JJ, Whitley LD, Yao X (eds) Proceedings of the international conference on parallel problem solving from nature. LNCS, vol 4193. Springer, Heidelberg, pp 192–201

  • García-Martínez C, Lozano M, Herrera F, Molina D, Sánchez AM (2008) Global and local real-coded genetic algorithms based on parent-centric crossover operators. Eur J Oper Res 185(3):1088–1113

  • Glover F, Kochenberger G (eds) (2003) Handbook of metaheuristics. Kluwer, Dordrecht

  • Goldberg DE (1989) Genetic algorithms in search, optimization and machine learning. Addison-Wesley/Longman, Menlo Park/London

  • Goldberg DE, Korb B, Deb K (1989) Messy genetic algorithms: motivation, analysis, and first results. Complex Syst 3:493–530

  • Gortazar F, Duarte A, Laguna M, Martí R (2008) Context-independent scatter search for binary problems. Technical report, Leeds School of Business, University of Colorado at Boulder

  • Hansen P, Mladenović N (2002) Variable neighborhood search. In: Glover F, Kochenberger G (eds) Handbook of metaheuristics. Kluwer, Dordrecht, pp 145–184

  • Harada K, Ikeda K, Kobayashi S (2006) Hybridization of genetic algorithm and local search in multiobjective function optimization: recommendation of GA then LS. In: Cattolico M (ed) Proceedings of the genetic and evolutionary computation conference. ACM, New York, pp 667–674

  • Harik G (1995) Finding multimodal solutions using restricted tournament selection. In: Eshelman LJ (ed) Proceedings of the international conference on genetic algorithms. Morgan Kaufmann, Menlo Park, pp 24–31

  • Helmberg C, Rendl F (2000) A spectral bundle method for semidefinite programming. SIAM J Optim 10(3):673–696

  • Herrera F, Lozano M (2000) Gradual distributed real-coded genetic algorithms. IEEE Trans Evol Comput 4(1):43–63

  • Holland JH (1975) Adaptation in natural and artificial systems. The University of Michigan Press, Ann Arbor

  • Holm S (1979) A simple sequentially rejective multiple test procedure. Scand J Stat 6:65–70

  • Hoos HH, Stützle T (2004) Stochastic local search. Morgan Kaufmann Publishers, San Francisco

  • Iman RL, Davenport JM (1980) Approximations of the critical region of the Friedman statistic. Commun Stat Theory Methods 9(6):571–595

  • Ishibuchi H, Hitotsuyanagi Y, Tsukamoto N, Nojima Y (2009) Use of biased neighborhood structures in multiobjective memetic algorithms. Soft Comput 13(8–9):795–810

  • Jones T (1995) Crossover, macromutation, and population-based search. In: Eshelman L (ed) Proceedings of the sixth international conference on genetic algorithms. Morgan Kaufmann, Menlo Park, pp 73–80

  • Karp RM (1972) Reducibility among combinatorial problems. In: Miller R, Thatcher J (eds) Complexity of computer computations. Plenum, NY, pp 85–103

  • Katayama K, Narihisa H (2001) A variant k-opt local search heuristic for binary quadratic programming. Trans IEICE (A) J84-A(3):430–435

  • Katayama K, Narihisa H (2005) An evolutionary approach for the maximum diversity problem. In: Recent advances in memetic algorithms. Springer, Heidelberg, pp 31–47

  • Kauffman SA (1989) Adaptation on rugged fitness landscapes. Lec Sci Complex 1:527–618

  • Kazarlis SA, Papadakis SE, Theocharis JB, Petridis V (2001) Microgenetic algorithms as generalized hill-climbing operators for GA optimization. IEEE Trans Evol Comput 5(3):204–217

  • Kong M, Tian P, Kao Y (2008) A new ant colony optimization algorithm for the multidimensional knapsack problem. Comput Oper Res 35(8):2672–2683

  • Krasnogor N, Smith J (2005) A tutorial for competent memetic algorithms: Model, taxonomy and design issues. IEEE Trans Evol Comput 9(5):474–488

  • Laguna M (2003) Scatter search. Kluwer, Boston

  • Lima CF, Pelikan M, Sastry K, Butz M, Goldberg DE, Lobo FG (2006) Substructural neighborhoods for local search in the bayesian optimization algorithm. In: Proceedings of the international conference on parallel problem solving from nature. LNCS, vol 4193, pp 232–241

  • Lin S, Kernighan BW (1973) An effective heuristic algorithm for the traveling-salesman problem. Oper Res 21(2):498–516

  • Lourenço HR, Martin O, Stützle T (2003) Iterated local search. In: Glover F, Kochenberger G (eds) Handbook of metaheuristics, Kluwer, Dordrecht, pp 321–353

  • Lozano M, García-Martínez C (2010) Hybrid metaheuristics with evolutionary algorithms specializing in intensification and diversification: overview and progress report. Comput Oper Res 37:481–497

  • Lozano M, Herrera F, Krasnogor N, Molina D (2004) Real-coded memetic algorithms with crossover hill-climbing. Evol Comput 12(3):273–302

  • Mahfoud SW (1992) Crowding and preselection revised. In: Männer R, Manderick B (eds) Parallel problem solving from nature, vol 2. Elsevier Science, London, pp 27–36

  • Marti R (2003) Multi-start methods. In: Glover F, Kochenberger G (eds) Handbook of metaheuristics. Kluwer, Dordrecht, pp 355–368

  • Martí R, Moreno-Vega JM, Duarte A (2009) Advanced multi-start methods. In: Handbook of metaheuristics, 2nd edn. Springer, Heidelberg

  • Merz P (2001) On the performance of memetic algorithms in combinatorial optimization. In: Second workshop on memetic algorithms, genetic and evolutionary computation conference. Morgan Kaufmann, Menlo Park, pp 168–173

  • Merz P, Katayama K (2004) Memetic algorithms for the unconstrained binary quadratic programming problem. Biosystems 79(1–3):99–118

  • Moscato P (1999) Memetic algorithms: a short introduction. In: Corne D, Dorigo M, Glover F (eds) New ideas in optimization. McGraw-Hill, NY, pp 219–234

  • Mutoh A, Tanahashi F, Kato S, Itoh H (2006) Efficient real-coded genetic algorithms with flexible-step crossover. Trans Electron Inf Syst 126(5):654–660

  • Nguyen HD, Yoshihara I, Yamamori K, Yasunaga M (2007) Implementation of effective hybrid GA for large-scale traveling salesman problems. IEEE Trans Syst Man Cybern B 37(1):92–99

  • Noman N, Iba H (2008) Accelerating differential evolution using an adaptive local search. IEEE Trans Evol Comput 12(1):107–125

  • O’Reilly UM, Oppacher F (1995) Hybridized crossover-based search techniques for program discovery. In: Proceedings of the world conference on evolutionary computation, vol 2, pp 573–578

  • Peng G, Ichiro I, Shigeru N (2007) Application of genetic recombination to genetic local search in TSP. Int J Inf Technol 13(1):57–66

  • Potts JC, Giddens TD, Yadav SB (1994) The development and evaluation of an improved genetic algorithm based on migration and artificial selection. IEEE Trans Syst Man Cybern 24:73–86

  • Raidl GR (2006) A unified view on hybrid metaheuristics. In: Almeida F, Blesa Aguilera MJ, Blum C, Moreno Vega JM, Pérez Pérez M, Roli A, Sampels M (eds) Hybrid metaheuristics. LNCS, vol 4030. Springer, Heidelberg, pp 1–12

  • Randall M (2006) Search space reduction as a tool for achieving intensification and diversification in ant colony optimisation. In: Ali M, Dapoigny R (eds) LNCS, vol 4031. Springer, Heidelberg, pp 254–262

  • Ray SS, Bandyopadhyay S, Pal SK (2007) Genetic operators for combinatorial optimization in TSP and microarray gene ordering. Appl Intell 26(3):183–195

  • Resende MGC, Ribeiro CC (2003) Greedy randomized adaptive search procedures. In: Glover F, Kochenberger G (eds) Handbook of metaheuristics. Kluwer, Dordrecht, pp 219–249

  • Sastry K, Goldberg DE (2004) Designing competent mutation operators via probabilistic model building of neighborhoods. In: Deb K, Poli R, Banzhaf W, Beyer H-G, Burk EK, Darwen PJ, Dasgupta D, Floreano D, Foster JA, Harman M, Holland O, Lanzi PL, Spector L, Tettamanzi A, Thierens D, Tyrrel AM (eds) Proceedings of the conference on genetic and evolutionary computation. LNCS, vol 3103, pp 114–125

  • Siarry P, Michalewicz Z (eds) (2008) Advances in metaheuristics for hard optimization. Natural computing. Springer, Heidelberg

  • Smith K, Hoos HH, Stützle T (2003) Iterated robust tabu search for MAX-SAT. In: Carbonell JG, Siekmann J (eds) Proceedings of the Canadian society for computational studies of intelligence conference. LNCS, vol 2671. Springer, Heidelberg, pp 129–144

  • Soak S-M, Lee S-W, Mahalik NP, Ahn B-H (2006) A new memetic algorithm using particle swarm optimization and genetic algorithm. In: Knowledge-based intelligent information and engineering systems. LNCS, vol 4251. Springer, Berlin, pp 122–129

  • Spears WM (2000) Evolutionary algorithms: the role of mutation and recombination. Springer, Heidelberg

  • Spears WM, De Jong KA (1991) On the virtues of parameterized uniform crossover. In: Belew R, Booker LB (eds) Proceedings of the international conference on genetic algorithms. Morgan Kaufmann, Menlo Park, pp 230–236

  • Sywerda G (1989) Uniform crossover in genetic algorithms. In: Schaffer JD (ed) Proceedings of the international conference on genetic algorithms. Morgan Kaufmann, Menlo Park, pp 2–9

  • Talbi EG (2002) A taxonomy of hybrid metaheuristics. J Heuristics 8(5):541–564

  • Thierens D (2004) Population-based iterated local search: restricting neighborhood search by crossover. In: Deb K, Poli R, Banzhaf W, Beyer H-G, Burk EK, Darwen PJ, Dasgupta D, Floreano D, Foster JA, Harman M, Holland O, Lanzi PL, Spector L, Tettamanzi A, Thierens D, Tyrrel AM (eds) Proceedings of the genetic and evolutionary computation conference. LNCS, vol 3103. Springer, Heidelberg, pp 234–245

  • Tsai H-K, Yang J-M, Tsai Y-F, Kao C-Y (2004) An evolutionary algorithm for large traveling salesman problems. IEEE Trans Syst Man Cybern 34(4):1718–1729

  • Tsutsui S, Ghosh A, Corne D, Fujimoto Y (1997) A real coded genetic algorithm with an explorer and an exploiter population. In: Bäck T (ed) Proceedings of the international conference on genetic algorithms. Morgan Kaufmann, Menlo Park, pp 238–245

  • Ventura S, Romero C, Zafra A, Delgado JA, Hervás-Martínez C (2008) JCLEC: A java framework for evolutionary computation. Soft Comput 12(4):381–392

  • Wang H, Wang D, Yang S (2009) A memetic algorithm with adaptive hill climbing strategy for dynamic optimization problems. Soft Comput 13(8–9):763–780

  • Whitley D (1989) The GENITOR algorithm and selection pressure: why rank-based allocation of reproductive trials is best. In: Schaffer JD (ed) Proceedings of the international conference on genetic algorithms. Morgan Kaufmann, Menlo Park, pp 116–121

  • Zar JH (1999) Biostatistical analysis. Prentice Hall, Englewood Cliffs

Acknowledgments

This work was supported by Research Projects TIN2008-05854 and P08-TIC-4173.

Author information

Correspondence to Carlos García-Martínez.

Appendices

Appendix 1: A test suite

The test suite that we have used for the experiments consists of 22 binary-coded test problems. They are described in the following sections.

1.1 Deceptive problem

In deceptive problems (Goldberg et al. 1989), certain schemata guide the search toward a solution that is not globally competitive. The schemata containing the global optimum are not significant enough and may therefore fail to proliferate during the genetic process. The deceptive problem used here consists of the concatenation of k subproblems of length 3. The fitness of each 3-bit section of the string is given in Table 13; the overall fitness is the sum of the fitness values of these deceptive subproblems.

Table 13 Deceptive order-3 problem

We have used a deceptive problem with 13 subproblems.
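
As an illustration, the sketch below evaluates such a concatenated deceptive function. Since Table 13 is not reproduced in this text, the 3-bit fitness values used here are the classic order-3 deceptive values from the literature and should be treated only as placeholders.

```python
# Sketch of an order-3 deceptive function built from k concatenated subproblems.
# The 3-bit fitness table is a placeholder (classic order-3 deceptive values);
# the actual values of Table 13 may differ.
DECEPTIVE_3 = {
    (0, 0, 0): 28, (0, 0, 1): 26, (0, 1, 0): 22, (0, 1, 1): 0,
    (1, 0, 0): 14, (1, 0, 1): 0,  (1, 1, 0): 0,  (1, 1, 1): 30,
}

def deceptive(x, k=13):
    """Sum the deceptive fitness of k consecutive 3-bit blocks of x (list of 0/1)."""
    assert len(x) == 3 * k
    return sum(DECEPTIVE_3[tuple(x[3 * i:3 * i + 3])] for i in range(k))
```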

1.2 Trap problem

The trap problem (Thierens 2004) consists of misleading subfunctions of different lengths. Specifically, the fitness function \(f(x)\) is constructed by adding subfunctions of length 1 (\(F_1\)), 2 (\(F_2\)), and 3 (\(F_3\)). Each subfunction has two optima: the optimal fitness value is obtained for the all-ones string, while the all-zeroes string represents a local optimum. The fitness of every other string in a subfunction is determined by its number of zeroes: the more zeroes, the higher the fitness value. This creates a large basin of attraction toward the local optimum. The fitness values of the subfunctions are specified in Table 14, where the columns indicate the number of ones in the subfunctions \(F_1,\) \(F_2,\) and \(F_3.\) The fitness function \(f(x)\) is composed of 4 \(F_3\) subfunctions, 6 \(F_2\) subfunctions, and 12 \(F_1\) subfunctions, so the overall length of the problem is 36. There are \(2^{10}\) optima, of which only one is the global optimum: the all-ones string, with a fitness value of 220.

$$ f(x)= \sum_{i=0}^3 F_3(x_{[3i:3i+2]}) + \sum_{i=0}^5 F_2(x_{[2i+12:2i+13]}) + \sum_{i=0}^{11} F_1(x_{24+i}) $$
Table 14 Fitness values of the subfunctions \(F_i\)
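
A minimal sketch of the evaluation implied by the formula above follows. Because the values of Table 14 are not reproduced here, the subfunction tables t3, t2, and t1 (indexed by the number of ones in a block) are left as parameters to be supplied by the caller.

```python
# Sketch of the 36-bit trap function: 4 F3 blocks, 6 F2 blocks and 12 F1 bits.
# t3, t2, t1 map the number of ones in a block to its fitness (Table 14);
# their values are not reproduced here, so they are passed in by the caller.
def trap(x, t3, t2, t1):
    assert len(x) == 36
    f3 = sum(t3[sum(x[3 * i:3 * i + 3])] for i in range(4))         # bits 0..11
    f2 = sum(t2[sum(x[2 * i + 12:2 * i + 14])] for i in range(6))   # bits 12..23
    f1 = sum(t1[x[24 + i]] for i in range(12))                      # bits 24..35
    return f3 + f2 + f1
```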

1.3 Max-Sat Problem

The satisfiability problem in propositional logic (SAT) (Smith et al. 2003) is the task of deciding whether a given propositional formula has a model. More formally, given a set of m clauses \(\{C_1,\ldots,C_m\}\) involving n Boolean variables \(X_1, \ldots, X_n,\) the SAT problem is to decide whether there exists an assignment of values to the variables such that all clauses are simultaneously satisfied.

Max-Sat is the optimization variant of SAT and can be seen as a generalization of the SAT problem: given a propositional formula in conjunctive normal form (CNF), the Max-Sat problem is to find a variable assignment that maximizes the number of satisfied clauses. The fitness function returns the percentage of satisfied clauses.

We have used two sets of instances of the Max-Sat problem with 100 variables, 3 variables per clause, and 1,200 and 2,400 clauses, respectively. They have been obtained from De Jong et al. (1997) and are denoted as M-Sat(n, m, l), where l indicates the number of variables involved in each clause (3). Each run i of every algorithm uses a specific seed (\({\rm seed}_i\)) for generating the M-Sat(n, m, l) instance, i.e., the ith execution of every algorithm uses the same \({\rm seed}_i,\) whereas the jth execution uses \({\rm seed}_j.\)
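
The sketch below illustrates how such an instance can be evaluated. The simple clause generator is only a hypothetical stand-in for the generator of De Jong et al. (1997); the point is the fitness computation itself, i.e., the fraction of satisfied clauses.

```python
import random

# Sketch of the M-Sat(n, m, l) fitness: the fraction of satisfied clauses.
# The clause generator below is a simple stand-in for the generator of
# De Jong et al. (1997); only the fitness computation itself is the point.
def make_instance(n, m, l, seed):
    rng = random.Random(seed)
    clauses = []
    for _ in range(m):
        variables = rng.sample(range(n), l)                      # l distinct variables
        signs = [rng.choice((True, False)) for _ in variables]   # positive or negated literal
        clauses.append(list(zip(variables, signs)))
    return clauses

def max_sat_fitness(x, clauses):
    """Return the fraction of clauses satisfied by assignment x (list of 0/1)."""
    satisfied = sum(
        any((x[v] == 1) == positive for v, positive in clause)
        for clause in clauses
    )
    return satisfied / len(clauses)
```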

1.4 NK-landscapes

In the NK model (Kauffman 1989), N represents the number of genes in a haploid chromosome and K represents the number of linkages each gene has to other genes in the same chromosome. To compute the fitness of the entire chromosome, the fitness contribution from each locus is averaged as follows:

$$ f(s)={\frac{1}{N}} \sum_{i=1}^N f({\rm locus}_i) $$
(1)

where the fitness contribution of each locus, \(f({\rm locus}_i),\) is determined by using the (binary) value of gene i together with values of the K interacting genes as an index into a table \(T_i\) of size \(2^{K+1}\) of randomly generated numbers uniformly distributed over the interval \([0, 1].\) For a given gene i, the set of K linked genes may be randomly selected or consist of the immediately adjacent genes.

We have used two sets of instances of the NK-Landscape problem: one with \(N = 48\) and \(K = 4,\) and another with \(N = 48\) and \(K = 12.\) They are denoted as NKLand(N, K) and have been obtained from De Jong et al. (1997). Each run i of every algorithm uses a different seed (\({\rm seed}_i\)) for generating the NKLand(N, K) instance, i.e., the ith execution of every algorithm uses the same \({\rm seed}_i,\) whereas the jth execution uses \({\rm seed}_j.\)
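
A minimal sketch of Eq. (1) is shown below, assuming that the K interacting genes of locus i are its immediately adjacent genes on a circular chromosome (the alternative of randomly selected neighbors is mentioned above).

```python
import random

# Sketch of the NK-landscape fitness of Eq. (1), assuming the K interacting
# genes of locus i are its immediately adjacent genes (circular chromosome).
def make_tables(N, K, seed):
    rng = random.Random(seed)
    # One lookup table of size 2^(K+1) per locus, uniform in [0, 1].
    return [[rng.random() for _ in range(2 ** (K + 1))] for _ in range(N)]

def nk_fitness(s, tables, K):
    N = len(s)
    total = 0.0
    for i in range(N):
        bits = [s[(i + j) % N] for j in range(K + 1)]   # gene i and its K neighbors
        index = int("".join(map(str, bits)), 2)         # binary index into table T_i
        total += tables[i][index]
    return total / N
```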

1.5 P-peak problems

The P-peak problem generator (Spears 2000) creates instances with a certain number of peaks (the degree of multi-modality). For a problem with P peaks, P bit strings of length L are randomly generated; each of these strings is a peak (a local optimum) in the landscape. Different heights can be assigned to different peaks according to various schemes (equal height, linear, logarithm-based, and so on). To evaluate an arbitrary solution S, first locate the nearest peak in Hamming space, call it \({\rm Peak}_n(S).\) Then, the fitness of S is the number of bits the string has in common with \({\rm Peak}_n(S),\) divided by L, and scaled by the height of the nearest peak. If there is a tie when finding the nearest peak, the highest peak is chosen.

We have used different groups of P-Peak instances, denoted as PPeaks(P, L). Each run i of every algorithm uses a different seed (\({\rm seed}_i\)) for generating the PPeaks(P, L) instance. A linear scheme has been used for assigning heights to peaks in \([0.6, 1].\)
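
The sketch below follows the evaluation rule just described: the fraction of bits shared with the nearest peak, scaled by that peak's height, with ties broken in favor of the higher peak. The linear height scheme over \([0.6, 1]\) is implemented in a straightforward way.

```python
import random

# Sketch of the P-Peaks evaluation: fitness is the fraction of bits shared with
# the nearest peak (ties broken by the higher peak), scaled by that peak's height.
def make_peaks(P, L, seed):
    rng = random.Random(seed)
    peaks = [[rng.randint(0, 1) for _ in range(L)] for _ in range(P)]
    # Linear height scheme over [0.6, 1].
    heights = [0.6 + 0.4 * p / (P - 1) for p in range(P)] if P > 1 else [1.0]
    return peaks, heights

def p_peaks_fitness(s, peaks, heights):
    L = len(s)
    best = None
    for peak, h in zip(peaks, heights):
        common = sum(a == b for a, b in zip(s, peak))   # L minus the Hamming distance
        key = (common, h)                               # nearest first, then highest
        if best is None or key > best[0]:
            best = (key, common, h)
    _, common, h = best
    return (common / L) * h
```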

1.6 Max-cut problem

The Max-cut problem (Karp 1972) is defined as follows: let an undirected and connected graph \(G=(V,E),\) where \(V=\{1,2,\ldots,n\}\) and \(E \subset\{(i,j) : 1 \leq i < j \leq n\},\) be given. Let the edge weights \(w_{ij} = w_{ji}\) be given such that \(w_{ij}=0\) \(\forall (i,j) \not \in E,\) and in particular, let \(w_{ii}=0.\) The Max-cut problem is to find a bipartition \((V_1,V_2)\) of V that maximizes the sum of the weights of the edges between \(V_1\) and \(V_2.\)

We have used 6 instances of the Max-cut problem (G10, G12, G17, G18, G19, G43), obtained from Helmberg and Rendl (2000).
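
A minimal sketch of the objective, assuming the bipartition is encoded as a binary vector (0 places a vertex in \(V_1\), 1 in \(V_2\)) and the graph is given as a weighted edge list:

```python
# Sketch of the max-cut objective: a binary vector x encodes the bipartition
# (x[i] = 0 puts vertex i in V1, x[i] = 1 puts it in V2); the fitness is the
# total weight of the edges crossing the cut.
def max_cut_fitness(x, edges):
    """edges is a list of (i, j, w_ij) triples with 0-based vertex indices."""
    return sum(w for i, j, w in edges if x[i] != x[j])
```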

1.7 Unconstrained binary quadratic programming problem

The objective of Unconstrained Binary Quadratic Programming (BQP) (Beasley 1998) is to find, given a symmetric rational \(n \times n\) matrix \(Q=(q_{ij}),\) a binary vector of length n that maximizes the following quantity:

$$ f(x)=x^t Q x= \sum_{i=1}^n \sum_{j=1}^n q_{ij} x_i x_j, \quad x_i \in \{0,1\} $$
(2)

We have used four instances with different values of n, taken from the OR-Library (Beasley 1990). They are the first instances of the BQP problems in the files ‘bqp50’, ‘bqp100’, ‘bqp250’, and ‘bqp500’, and are denoted BQP(50), BQP(100), BQP(250), and BQP(500), respectively.
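
Evaluating Eq. (2) is a direct matrix–vector computation; a minimal sketch:

```python
import numpy as np

# Sketch of the BQP objective of Eq. (2): f(x) = x^T Q x for a binary vector x.
def bqp_fitness(x, Q):
    x = np.asarray(x, dtype=float)
    Q = np.asarray(Q, dtype=float)
    return float(x @ Q @ x)
```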

Appendix 2: Results

Tables 15 and 16 show the fitness values obtained by the algorithms studied in the empirical analysis of Sects. 5, 6, and 7. The best results of every group of algorithms (RMLS, ILS, and VNS) are boldfaced. Table 17 displays the number of successful restarts obtained by the algorithms in Sect. 7.1; the best results of every group with the same LS method are marked.

Table 15 Results of the RMLS, ILS-0.1, and ILS-0.25 algorithms with each refinement procedure
Table 16 Results of the ILS-0.5, ILS-0.75, and VNS algorithms with each refinement procedure
Table 17 Number of successful restarts for each LS-based MH

Appendix 3: Statistical analysis

In this section, we describe the basic functionality of each non-parametric test applied in this study, together with the purpose of its use (a small numerical sketch of the Friedman, Iman–Davenport, and Holm computations is given after this list):

  • Friedman test: Although we will not use this test, because of its undesirable conservative effect, we describe it because it is the basis of the following one. The Friedman test is a non-parametric equivalent of the repeated-measures ANOVA. It computes the ranking of the observed results for each algorithm (\(r_j\) for algorithm j, with k algorithms) on each function, assigning the ranking 1 to the best of them and the ranking k to the worst. Under the null hypothesis, which states that the results of the algorithms are equivalent and, therefore, that their average rankings are also similar, the Friedman statistic

    $$ \chi^2_F = {\frac{12N}{k(k + 1)}} \left [\sum_j R^2_j - {\frac{k(k+1)^2}{4}} \right] $$
    (3)

    is distributed according to a \(\chi^2\) distribution with \(k - 1\) degrees of freedom, where \(R_j = \frac{1}{N} \sum_i r^i_j\) and N is the number of functions. The critical values for the Friedman statistic coincide with those of the \(\chi^2\) distribution when \(N > 10\) and \(k > 5;\) otherwise, the exact values can be found in Zar (1999).

  • Iman and Davenport test (Iman and Davenport 1980): It is derived from the Friedman statistic, given that the latter produces an undesirable conservative effect. The statistic is

    $$ F_F = {\frac{(N - 1)\chi^2_F}{N(k - 1) - \chi^2_F}} $$
    (4)

    and it is distributed according to an F distribution with \(k - 1\) and \((k - 1)(N - 1)\) degrees of freedom.

  • Holm method (Holm 1979): If the null hypothesis is rejected by the Iman–Davenport test, we can proceed with a post-hoc test. The Holm test is applied when we want to compare a control algorithm (the one with the best average Friedman ranking) against the remaining ones. It sequentially checks the hypotheses, ordered according to their significance. We denote the ordered p-values by \(p_1, p_2, \ldots,\) such that \(p_1 \leq p_2 \leq \cdots \leq p_{k-1}.\) The Holm method compares each \(p_i\) with \(\alpha / (k - i),\) starting from the most significant p-value. If \(p_1\) is below \(\alpha / (k - 1),\) the corresponding hypothesis is rejected and we proceed to compare \(p_2\) with \(\alpha / (k - 2).\) If the second hypothesis is rejected, the process continues. As soon as a certain hypothesis cannot be rejected, all the remaining hypotheses are kept as accepted. The statistic for comparing algorithm i with algorithm j is:

    $$ z = (R_i - R_j) / \sqrt{{\frac{k (k + 1)}{6N}}} $$
    (5)

    The value of z is used to find the corresponding probability from the table of the normal distribution, which is then compared with the chosen value of \(\alpha.\)

  • Wilcoxon signed rank test: This is the analogue of the paired t test among non-parametric statistical procedures; therefore, it is a pairwise test that aims to detect significant differences between the results of two algorithms. Let \(d_i\) be the difference between the performance scores of the two algorithms on the ith of the N functions (we have normalized the results on every function to \( [ 0, 1 ] \) according to the best and worst results obtained by all the algorithms). The differences are ranked according to their absolute values; average ranks are assigned in case of ties. Let \(R^+\) be the sum of ranks for the functions on which the second algorithm outperformed the first, and \(R^-\) the sum of ranks for the opposite case. Ranks of \(d_i = 0\) are split evenly between the two sums; if there is an odd number of them, one is ignored:

    $$ R^+= \sum_{d_i > 0} {\rm rank}(d_i) + 1 / 2 \sum_{d_i = 0} {\rm rank}(d_i) $$
    (6)
    $$ R^-= \sum_{d_i < 0} {\rm rank}(d_i) + 1 / 2 \sum_{d_i = 0} {\rm rank}(d_i) $$
    (7)

    Let T be the smaller of the two sums, \(T = \min(R^+,R^-).\) If T is less than or equal to the critical value of the Wilcoxon distribution for N degrees of freedom [Table B.12 in Zar (1999)], the null hypothesis of equality of means is rejected.
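
To make the formulas above concrete, the following sketch computes the Friedman ranks and statistic of Eq. (3), the Iman–Davenport statistic of Eq. (4), and Holm's procedure based on the z statistic of Eq. (5). The results matrix is hypothetical, and lower values are assumed to be better.

```python
import numpy as np
from scipy import stats

# Numerical sketch of Eqs. (3)-(5): Friedman ranks, the Iman-Davenport statistic
# and Holm's procedure against a control algorithm. `results` is a hypothetical
# N x k matrix (N functions, k algorithms); lower values are assumed to be better.
def friedman_iman_davenport(results):
    results = np.asarray(results, dtype=float)
    N, k = results.shape
    ranks = np.apply_along_axis(stats.rankdata, 1, results)   # rank 1 = best per function
    R = ranks.mean(axis=0)                                    # average rank per algorithm
    chi2_f = 12 * N / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4)  # Eq. (3)
    f_f = (N - 1) * chi2_f / (N * (k - 1) - chi2_f)                            # Eq. (4)
    p_value = stats.f.sf(f_f, k - 1, (k - 1) * (N - 1))
    return R, chi2_f, f_f, p_value

def holm(R, N, alpha=0.05):
    k = len(R)
    control = int(np.argmin(R))                               # best average Friedman ranking
    se = np.sqrt(k * (k + 1) / (6 * N))
    others = [j for j in range(k) if j != control]
    # Two-sided p-values from the z statistic of Eq. (5), sorted ascending.
    pvals = sorted((2 * stats.norm.sf(abs((R[control] - R[j]) / se)), j) for j in others)
    rejected = []
    for i, (p, j) in enumerate(pvals):
        if p < alpha / (k - 1 - i):
            rejected.append(j)
        else:
            break                                             # keep the remaining hypotheses
    return control, rejected
```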

Cite this article

García-Martínez, C., Lozano, M. Evaluating a local genetic algorithm as context-independent local search operator for metaheuristics. Soft Comput 14, 1117–1139 (2010). https://doi.org/10.1007/s00500-009-0506-1