Balanced-evolution genetic algorithm for combinatorial optimization problems: the general outline and implementation of balanced-evolution strategy based on linear diversity index

Published in Natural Computing.

Abstract

How to rationally inject randomness to control population diversity remains a difficult problem in evolutionary algorithms. We propose the balanced-evolution genetic algorithm (BEGA) as a case study of this problem. The similarity guide matrix (SGM) is a two-dimensional matrix that expresses the distribution of the population (or a subpopulation) in coding space. Unlike binary-coding similarity indexes, SGM is applicable to both binary-coding and symbol-coding problems. In BEGA, opposite-direction and forward-direction regions are defined by using two SGMs as reference points. In the opposite-direction region, the diversity subpopulation always tries to increase the Hamming distances between its individuals and the current population; in the forward-direction region, the intensification subpopulation always tries to decrease the Hamming distances between its individuals and the current elitism population. The diversity subpopulation is therefore more suitable for injecting randomness. The linear diversity index (LDI) measures the individual density around the center-point individual in coding space and is characterized by its linearity. According to LDI, we control the search-region ranges of the diversity and intensification subpopulations by using negative and positive perturbations, respectively. Thus, the search effort is balanced between exploration and exploitation. We compared BEGA with CHC, the dual-population genetic algorithm, the variable dissortative mating genetic algorithm, the quantum-inspired evolutionary algorithm, and the greedy genetic algorithm on 12 benchmarks, with acceptable experimental results. It is also worth noting that BEGA can directly solve the bounded knapsack problem (a symbol-coding problem) as an EA-based solver, without transforming it into an equivalent binary knapsack problem.




Acknowledgements

The authors would like to thank anonymous reviewers for their constructive comments, especially for improving the concepts of similarity guide matrix and linear diversity index. This work was supported by National Natural Science Foundation of China (Grant No. 61272518) and YangFan Innovative and Entrepreneurial Research Team Project of Guangdong Province.


Corresponding author

Correspondence to HongGuang Zhang.

Appendices

Appendix 1

The trap problem consists of k basic functions; its fitness is the sum of the fitness values of the k basic functions (García-Martínez and Lozano 2008). The best solution of a basic function, the all-ones string, has a fitness value of 220 (Table 3). The basic function is defined by

$$f(X) = \sum_{i = 0}^{3} F_{3}(X_{[3i:3i + 2]}) + \sum_{i = 0}^{5} F_{2}(X_{[12 + 2i:13 + 2i]}) + \sum_{i = 0}^{11} F_{1}(X_{[24 + i]})$$
(21)
Table 3 Basic functions in the trap problem. ONESUM is the number of bits whose value equals one

Analogous to the trap problem, the basic functions of the order-3 deceptive, order-4 deceptive, and order-4 partially deceptive problems (García-Martínez and Lozano 2008; Baluja 1992) are given in Tables 4, 5, and 6, respectively.

Table 4 Order-3 deceptive problem
Table 5 Order-4 deceptive problem
Table 6 Order-4 partially deceptive problem
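The block decomposition of Eq. (21) can be sketched in a few lines of Python. The lookup-table values below are hypothetical placeholders (the real values of F1, F2, and F3 are those of Table 3, not reproduced here); they are chosen only so that the all-ones optimum scores 220, as stated above.

```python
def block_fitness(x, f3, f2, f1):
    """Evaluate Eq. (21): four 3-bit blocks scored by F3, six 2-bit
    blocks scored by F2, and twelve single bits scored by F1, each
    lookup keyed by ONESUM (the number of ones in the block)."""
    total = sum(f3[sum(x[3 * i:3 * i + 3])] for i in range(4))
    total += sum(f2[sum(x[12 + 2 * i:14 + 2 * i])] for i in range(6))
    total += sum(f1[x[24 + i]] for i in range(12))
    return total

# Hypothetical deceptive lookup tables (NOT the values of Table 3),
# chosen only so that the all-ones string of length 36 scores 220.
F3 = {0: 28, 1: 26, 2: 0, 3: 40}
F2 = {0: 6, 1: 0, 2: 8}
F1 = {0: 0, 1: 1}
```

With these placeholder tables, `block_fitness([1] * 36, F3, F2, F1)` returns 4 × 40 + 6 × 8 + 12 × 1 = 220.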

The overlapping deceptive problem (Pelikan et al. 2000) is defined by

$$f(X) = \sum\limits_{i = 1}^{N - 2} {f_{d} (X_{[i:i + 2]} )}$$
(22)
$$f_{d}(X_{[i:i + 2]}) = \begin{cases} 0.9 & u = 0 \\ 0.8 & u = 1 \\ 0 & u = 2 \\ 1 & u = 3 \end{cases}$$
(23)

where i is the first position of each substring X[i:i+2], N is the coding length, and u is the number of ones in the substring X[i:i+2].
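Equations (22) and (23) amount to sliding a 3-bit window over the string; a string of length N contributes N − 2 overlapping substrings. A minimal sketch (0-indexed):

```python
def f_d(substring):
    # Eq. (23): score a 3-bit substring by u, its number of ones
    return {0: 0.9, 1: 0.8, 2: 0.0, 3: 1.0}[sum(substring)]

def overlap_deceptive(x):
    # Eq. (22): sum f_d over all N - 2 overlapping 3-bit windows
    return sum(f_d(x[i:i + 3]) for i in range(len(x) - 2))
```

For example, the all-ones string of length 5 has three windows, each scoring 1.0, for a total of 3.0.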

The PPeaks problem (Spears 2000), whose optimal value is 1.0, is defined by

$$f(X) = \frac{1}{N}\max_{i = 0}^{p - 1} \{ N - Hamdis(X,Peak_{i} )\}$$
(24)

where Hamdis() returns the Hamming distance between X and Peaki (an N-bit string).
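A direct transcription of Eq. (24), assuming the peaks are given as equal-length bit lists:

```python
def hamdis(a, b):
    # Hamming distance between two equal-length bit strings
    return sum(ai != bi for ai, bi in zip(a, b))

def ppeaks(x, peaks):
    # Eq. (24): fitness is 1 - (distance to the nearest peak) / N,
    # so it equals 1.0 exactly when x coincides with some peak
    n = len(x)
    return max(n - hamdis(x, p) for p in peaks) / n
```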

The binary knapsack problem is as follows. Let pj be the profit and wj the weight of a type-j item, and let C be the weight capacity of the knapsack. X = {x1, x2, …, xj, …, xN} is a binary decision vector: xj = 1 if a type-j item is loaded into the knapsack, and xj = 0 otherwise. The problem is defined by Eqs. (25)–(27). First, we generate uncorrelated, weakly correlated, and strongly correlated datasets (Martello et al. 1999; Pisinger 1999; Truong et al. 2013). Uncorrelated dataset: pj and wj are randomly distributed in (10, R). Weakly correlated dataset: wj is randomly distributed in (1, R), and pj (pj ≥ 1) is randomly distributed in (wj − R/10, wj + R/10). Strongly correlated dataset: wj is randomly distributed in (1, R), and pj = wj + 10. In this paper, R = 100. Second, we use the constraint-handling method of Zitzler (1999): loaded items with the lowest profit/weight ratio qj = pj/wj (1 ≤ j ≤ N) are removed one by one until the capacity constraint is satisfied.

$$Maximize\sum\limits_{j = 1}^{N} {p_{j} x_{j} }$$
(25)
$$C = 0.5\sum\limits_{j = 1}^{N} {w_{j} }$$
(26)

subject to

$$\sum\limits_{j = 1}^{N} {w_{j} x_{j} } \le C$$
(27)
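The dataset generation and the ratio-based repair described above can be sketched as follows; `make_instance` and `repair` are illustrative names, and the greedy repair is a simplified reading of Zitzler's (1999) method, not the authors' exact implementation.

```python
import random

def make_instance(n, kind="uncorrelated", R=100, seed=0):
    """Generate a knapsack dataset as in Martello et al. (1999):
    uncorrelated, weakly correlated, or strongly correlated."""
    rng = random.Random(seed)
    if kind == "uncorrelated":
        w = [rng.randint(10, R) for _ in range(n)]
        p = [rng.randint(10, R) for _ in range(n)]
    elif kind == "weakly":
        w = [rng.randint(1, R) for _ in range(n)]
        p = [max(1, rng.randint(wj - R // 10, wj + R // 10)) for wj in w]
    else:  # strongly correlated
        w = [rng.randint(1, R) for _ in range(n)]
        p = [wj + 10 for wj in w]
    C = sum(w) // 2  # Eq. (26)
    return p, w, C

def repair(x, p, w, C):
    """Constraint handling: remove loaded items with the lowest
    profit/weight ratio, one by one, until Eq. (27) holds."""
    x = list(x)
    loaded = sorted((j for j in range(len(x)) if x[j]),
                    key=lambda j: p[j] / w[j])
    weight = sum(w[j] for j in range(len(x)) if x[j])
    for j in loaded:
        if weight <= C:
            break
        x[j] = 0
        weight -= w[j]
    return x
```

For example, with p = [10, 20, 30], w = [5, 5, 5], and C = 10, the overloaded solution [1, 1, 1] is repaired to [0, 1, 1] by dropping the lowest-ratio item.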

The bounded knapsack problem is also formulated by Eqs. (25)–(27); the difference is that xj now expresses how many type-j items are loaded into the knapsack. First, we use the same dataset-generation methods (Martello et al. 1999; Pisinger 1999) for the bounded knapsack problem. Second, analogous to the binary case, the constraint-handling method gradually decreases the number of each loaded item. In this paper, bmax = 4; each bound bj is randomly generated with 1 ≤ bj ≤ bmax. We assume that pj, wj, bj, and C are greater than 0 and

$$\sum\limits_{j = 1}^{N} {w_{j} b_{j} } > C$$
(28)
$$w_{j} b_{j} \le C, \quad 1 \le j \le N$$
(29)
$$C = 0.5\sum\limits_{j = 1}^{N} {w_{j} } b_{j}$$
(30)
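The bounded-case repair can be sketched analogously: instead of deleting an item outright, the count xj of the lowest-ratio loaded item type is decreased one unit at a time until the weight constraint holds. The exact removal order is our assumption, not the authors' stated implementation.

```python
def repair_bounded(x, p, w, C):
    """Constraint handling for the bounded knapsack problem: x[j] is
    the number of type-j items loaded; decrement the count of the
    lowest profit/weight-ratio type until the weight fits in C."""
    x = list(x)
    order = sorted(range(len(x)), key=lambda j: p[j] / w[j])
    weight = sum(w[j] * x[j] for j in range(len(x)))
    for j in order:
        while x[j] > 0 and weight > C:
            x[j] -= 1
            weight -= w[j]
        if weight <= C:
            break
    return x
```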

Appendix 2

CHC uses cross-generational elitist selection, heterogeneous recombination, and cataclysmic mutation (Eshelman 1991). Two parents are allowed to mate only when the Hamming distance between them is greater than a threshold. When the threshold drops to zero, CHC reinitializes the population by cataclysmic mutation, keeping only the best individual.
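A minimal sketch of CHC's incest-prevention test, assuming HUX recombination (swap exactly half of the differing bits); `chc_mate` is a hypothetical helper name, not Eshelman's implementation.

```python
import random

def chc_mate(parent_a, parent_b, threshold):
    """Mate only if the Hamming distance between the parents exceeds
    the threshold; otherwise return None (incest prevention). HUX
    swaps half of the differing bit positions between the children."""
    diff = [i for i, (a, b) in enumerate(zip(parent_a, parent_b)) if a != b]
    if len(diff) <= threshold:
        return None  # parents too similar to mate
    child_a, child_b = list(parent_a), list(parent_b)
    for i in random.sample(diff, len(diff) // 2):
        child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b
```

Note that HUX preserves the Hamming distance between the two children, which is what keeps CHC's population diverse until the threshold collapses.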

DPGA (Park and Ryu 2010) and VDMGA (Fernandes and Rosa 2006, 2008) are given in Algorithms 5 and 6, respectively.


QEA (Han and Kim 2002; Platel et al. 2009) is inspired by the principles of quantum computing. The quantum rotation gate U(θ) is given in Eq. (31), where θ = s(αj βj) × Δθ, as given by the lookup table in Table 7.

Table 7 Lookup table of θ
$$U(\theta) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}$$
(31)
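Applying U(θ) of Eq. (31) to a single qubit's amplitude pair (αj, βj) is an ordinary plane rotation:

```python
import math

def rotate(alpha, beta, theta):
    # Eq. (31): rotate the amplitude vector (alpha, beta) by theta
    return (math.cos(theta) * alpha - math.sin(theta) * beta,
            math.sin(theta) * alpha + math.cos(theta) * beta)
```

Because U(θ) is orthogonal, the rotation preserves α² + β² = 1, so the qubit stays normalized throughout the search.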

In the bounded-knapsack-problem (i.e. symbol-coding problem) field, dynamic programming, branch-and-bound, and reduction algorithms are frequently used (Martello and Toth 1990). Another kind of algorithm transforms the bounded knapsack problem into an equivalent binary knapsack problem (Martello and Toth 1990); however, this implies much higher computation cost, because the coding length increases. In EAs, little research directly solves the bounded knapsack problem. Thus, GGA is used as the comparison algorithm for the bounded knapsack problem.



Cite this article

Zhang, H., Liu, Y. & Zhou, J. Balanced-evolution genetic algorithm for combinatorial optimization problems: the general outline and implementation of balanced-evolution strategy based on linear diversity index. Nat Comput 17, 611–639 (2018). https://doi.org/10.1007/s11047-018-9670-5
