Diversification methods for zero-one optimization

Journal of Heuristics 25, 643–671 (2019)

Abstract

We introduce new diversification methods for zero-one optimization that significantly extend strategies previously introduced in the setting of metaheuristic search. Our methods incorporate easily implemented strategies for partitioning assignments of values to variables, accompanied by processes called augmentation and shifting which create greater flexibility and generality. We then show how the resulting collection of diversified solutions can be further diversified by means of permutation mappings, which equally can be used to generate diversified collections of permutations for applications such as scheduling and routing. These methods can be applied to non-binary vectors by means of binarization procedures and diversification-based learning procedures that provide connections to applications in clustering and machine learning. Detailed pseudocode and numerical illustrations are provided to show the operation of our methods and the collections of solutions they create.

Notes

  1. A different limiting value for g is proposed in Glover (1997), consisting of \( \text{gMax} = \lfloor n/5 \rfloor \). The rationale for this upper limit in both cases is that as g grows, the difference between x and x' becomes smaller, and hence a bound is sought to prevent x' from becoming too similar to x.

References

  • Campos, V., Glover, F., Laguna, M., Martí, R.: An experimental evaluation of a scatter search for the linear ordering problem. J. Global Optim. 21, 397–414 (2001)
  • Campos, V., Laguna, M., Martí, R.: Context-independent scatter and tabu search for permutation problems. INFORMS J. Comput. 17(1), 111–122 (2005)
  • Duarte, A., Martí, R.: Tabu search for the maximum diversity problem. Eur. J. Oper. Res. 178, 71–84 (2007)
  • Gallego, M., Laguna, M., Martí, R., Duarte, A.: Tabu search with strategic oscillation for the maximally diverse grouping problem. J. Oper. Res. Soc. 64(5), 724–734 (2013)
  • Glover, F.: Heuristics for integer programming using surrogate constraints. Decis. Sci. 8, 156–166 (1977)
  • Glover, F.: Tabu search for nonlinear and parametric optimization (with links to genetic algorithms). Discrete Appl. Math. 49, 231–255 (1994)
  • Glover, F.: A template for scatter search and path relinking. In: Hao, J.-K., Lutton, E., Ronald, E., Schoenauer, M., Snyers, D. (eds.) Artificial Evolution, Lecture Notes in Computer Science 1363, pp. 13–54. Springer, Berlin (1997)
  • Glover, F.: Scatter search and path relinking. In: Corne, D., Dorigo, M., Glover, F. (eds.) New Ideas in Optimization, pp. 297–316. McGraw Hill, New York (1999)
  • Glover, F.: Multi-start and strategic oscillation methods: principles to exploit adaptive memory. In: Laguna, M., Gonzales Velarde, J.L. (eds.) Computing Tools for Modeling, Optimization and Simulation: Interfaces in Computer Science and Operations Research, pp. 1–24. Kluwer Academic Publishers, Dordrecht (2000)
  • Glover, F.: Adaptive memory projection methods for integer programming. In: Rego, C., Alidaee, B. (eds.) Metaheuristic Optimization via Memory and Evolution, pp. 425–440. Kluwer Academic Publishers, Dordrecht (2005)
  • Glover, F., Hao, J.-K.: Diversification-based learning in computing and optimization. J. Heuristics (2018, in press)
  • Glover, F., Laguna, M.: Tabu search. In: Reeves, C. (ed.) Modern Heuristic Techniques for Combinatorial Problems, pp. 71–140. Blackwell Scientific Publishing, Oxford (1993)
  • Laguna, M., Martí, R.: Scatter Search: Methodology and Implementations in C. Kluwer Academic Publishers, Boston (2003). ISBN 1-4020-7376-3
  • Mayoraz, E., Moreira, M.: Combinatorial approach for data binarization. In: Principles of Data Mining and Knowledge Discovery, Lecture Notes in Computer Science 1704, pp. 442–447 (1999)

Acknowledgements

This research has been supported in part by the Key Laboratory of International Education Cooperation of Guangdong University of Technology.

Author information

Corresponding author

Correspondence to Weihong Xie.

Appendices

Appendix 1: The progressive gap (PG) method

We slightly modify the original description of the Progressive Gap method to clarify its main components and to give a foundation for the Extended PG method described below.

Notation for the PG method

  • g = a gap value

  • s = a starting index

  • k = an increment index

Method overview

Starting with the seed vector x, successive vectors x' are generated by complementing specific components xj of x. A gap value g iteratively varies over the range g = 1 to gMax, where \( \text{gMax} = \lfloor n^{0.5} + 0.5 \rfloor \) (see Footnote 1). Then, for each gap g, a starting index s iterates from s = 1 to sLim, where sLim = g except in the special case g = 2, where sLim is restricted to 1 (to avoid duplication among the vectors x' generated).

From the initial assignment x' = x, the method sets xj' = 1 − xj for the index j = s + kg, as the increment index k ranges from 0 to \( \text{kMax} = \lfloor (n - s)/g \rfloor \). Thus, xj' receives the complemented value of xj for j = s, s + g, s + 2g, …, thereby causing each j to be separated from the previous j by the gap g. (The actual gap between two successive values of j is thus g − 1. For example, when g = 1, the values j and j + g = j + 1 are adjacent, and in this sense have a "0 gap" between them.) The formula for kMax sets it as large as possible, subject to assuring that j does not exceed n when j attains its largest value j = s + kMax·g. Each time a vector x' is generated, the corresponding vector x" = Comp(x') is also generated. This simple pattern is repeated until no more gaps g or starting values s remain to be considered.

PG Algorithm

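The published PG Algorithm appears only as a figure image in the original article. As a substitute, the following minimal Python sketch reconstructs the method from the overview above; the names pg_method, comp and pool are ours, and details of the published figure may differ.

```python
import math

def comp(x):
    """Componentwise complement of a zero-one vector."""
    return [1 - xj for xj in x]

def pg_method(x):
    """Generate diversified zero-one vectors from the seed x by
    complementing components at j = s, s + g, s + 2g, ... for each gap g."""
    n = len(x)
    g_max = math.floor(n ** 0.5 + 0.5)
    pool = []
    for g in range(1, g_max + 1):
        s_lim = 1 if g == 2 else g        # sLim = g, except sLim = 1 when g = 2
        for s in range(1, s_lim + 1):
            xp = list(x)                  # x' = x
            k_max = (n - s) // g          # kMax = floor((n - s)/g)
            for k in range(k_max + 1):
                j = s + k * g             # 1-based index, as in the text
                xp[j - 1] = 1 - x[j - 1]
            pool.append(xp)
            if len(pool) > 1:             # per the Remark below: for the first
                pool.append(comp(xp))     # x', Comp(x') is the seed, so skip it
    return pool
```

For a seed of ten zeros, pg_method([0] * 10) reproduces (among others) the illustrative vectors listed after the Remark.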

Remark

The method can avoid generating x” = Comp(x’) when x’ is the first vector generated (i.e., x’ = x(1)), since in this case Comp(x’) = x, thus yielding the seed vector (x(0)).

To illustrate for the case where the seed vector is x = (0, 0, …, 0), the procedure generates the following vectors x’ for the sampling of values shown for the starting index s and the gap g. Note that the vector x’ for s = 2, g = 2 (marked with a “*” below) duplicates the complement of the x’ vector for s = 1, g = 2. This is the reason the algorithm restricts the value sLim to 1 when g = 2, thus causing the vector for s = 2, g = 2 to be skipped.

s = 1, g = 2: (1 0 1 0 1 0 1 0 1 0 …)
s = 1, g = 3: (1 0 0 1 0 0 1 0 0 1 …)
s = 2, g = 2: (0 1 0 1 0 1 0 1 0 1 …)*
s = 2, g = 3: (0 1 0 0 1 0 0 1 0 0 …)
s = 3, g = 2: (0 0 1 0 1 0 1 0 1 0 …)
s = 3, g = 3: (0 0 1 0 0 1 0 0 1 0 …)

Extended version

The Extended PG Method can be used to generate a larger number of points and provides an additional form of variation in the vectors generated. Applied in its complete form, the method can involve O(n³) effort, and hence it should either be applied just once during the execution of a search algorithm, to provide a pool to draw upon throughout subsequent iterations, or else applied in successive installments, as described in Sect. 6.

Brief overview

The extended method "fills in spaces" between successive j values that determine the assignment xj' = 1 − xj. The method makes this assignment for a string of j values from j = j1 to j2, where j2 is chosen to leave an unassigned position between j2 and the next value of j1, obtained by setting j1 = j1 + g. Consequently, j2 = j1 + g − 2 (and the method chooses j2 = j1 until g > 2).

The resulting algorithm avoids referring to a starting index s to identify the location of the “first j value” at which xj’ = 1 − xj. Instead, the starting value is always j = 1. This results from the fact that the complements x” produced for the x’ vectors automatically include all of the vectors x’ that would be derived by using different starting indexes s.

The extended algorithm is stated as follows.

Extended PG Algorithm

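As with the basic method, the published Extended PG Algorithm is a figure image. The sketch below is our reconstruction from the overview, reusing comp() and the math import from the previous sketch; the assumption that g again ranges from 1 to gMax follows the basic method and may differ from the published figure.

```python
def extended_pg(x, g_max=None):
    """Extended PG sketch: for each gap g, complement a run of j values from
    j1 to j2 = j1 + g - 2 (j2 = j1 while g <= 2), leaving one unassigned
    position before the next run starts at j1 + g."""
    n = len(x)
    if g_max is None:
        g_max = math.floor(n ** 0.5 + 0.5)   # assumption: same gMax as basic PG
    pool = []
    for g in range(1, g_max + 1):
        xp = list(x)
        j1 = 1                               # the starting index is always j = 1
        while j1 <= n:
            j2 = j1 if g <= 2 else j1 + g - 2
            for j in range(j1, min(j2, n) + 1):
                xp[j - 1] = 1 - x[j - 1]
            j1 += g
        pool.append(xp)
        if len(pool) > 1:                    # as before, avoid re-generating
            pool.append(comp(xp))            # the seed via Comp(x')
    return pool
```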

The PG Algorithm can be extended in additional ways, but we restrict attention to the preceding approach as the primary variation. Combining either the PG Algorithm or its extension with Algorithm 3 will produce an additional collection of diversified vectors if still more such vectors are sought.

Appendix 2: A “balanced” variant of the Max/Min algorithm

The idea underlying the Balanced Variant of Algorithm 2 is to assure that sets N(i) with an odd number of elements SetSize are split so that \( \lfloor \text{SetSize}/2 \rfloor \) of their elements go into NL(i) when an odd number of such sets have been encountered and \( \lceil \text{SetSize}/2 \rceil \) of their elements go into NL(i) when an even number of such sets have been encountered. The rule is applied anew at each iteration (each successive value of Iter), when creating a new partition from the current sets N(i) for i = 1 to iLast.

The "balanced" terminology comes from the fact that this approach will tend to balance the number of variables xj that are complemented and not complemented to produce the vector x' generated on the current iteration. When this approach is not used, the order in which the current N(i) sets occur could cause each set with |N(i)| odd to be split in the same way, putting \( \lceil \text{SetSize}/2 \rceil \) (or \( \lfloor \text{SetSize}/2 \rfloor \)) elements in NL(i), thus causing the number of complemented xj to exceed the number of xj not complemented (or vice versa).

When the Balanced Variant is used, the final assignment to be made (following the determination that MaxNum = 2) has a simple form that allows x’ and x” to be created by the following shortcut step.

$$ x_j' = 1 - x_j \quad \text{if } j \text{ is odd} \qquad (1') $$
$$ x_j' = x_j \quad \text{if } j \text{ is even} \qquad (2') $$

and

$$ x_j'' = x_j \quad \text{if } j \text{ is odd} \qquad (3') $$
$$ x_j'' = 1 - x_j \quad \text{if } j \text{ is even} \qquad (4') $$

Consequently, when MaxNum = 2, the method immediately makes this simplified final assignment and then stops.
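This shortcut is a direct transcription of (1′)–(4′); a minimal sketch in Python (odd/even refer to 1-based indices, as in the text, and the function name is ours):

```python
def shortcut_final_assignment(x):
    """When MaxNum = 2, build x' and x'' directly from Eqs. (1')-(4'):
    x' complements the odd positions of x, and x'' the even positions."""
    xp  = [1 - xj if j % 2 == 1 else xj for j, xj in enumerate(x, start=1)]
    xpp = [xj if j % 2 == 1 else 1 - xj for j, xj in enumerate(x, start=1)]
    return xp, xpp
```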

The detailed form of this approach is as follows. A logical variable named OddSet keeps track of whether an even or odd number of sets with |N(i)| odd have been encountered.

Balanced variant of the Max/Min generation method

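The full method appears as a figure image in the original article. The fragment below sketches only the balanced splitting rule described above; representing each N(i) as a list of indices and naming the complementary half NR(i) are our assumptions.

```python
def balanced_split(sets):
    """Split each set N(i) in half; odd-sized sets alternately send
    floor(|N(i)|/2) and ceil(|N(i)|/2) elements to NL(i), as tracked by
    the OddSet toggle."""
    odd_set = False                  # flips each time an odd-sized set occurs
    nl, nr = [], []
    for n_i in sets:
        half = len(n_i) // 2
        if len(n_i) % 2 == 1:
            odd_set = not odd_set
            take = half if odd_set else half + 1   # floor, then ceil, alternating
        else:
            take = half
        nl.append(n_i[:take])        # NL(i)
        nr.append(n_i[take:])        # NR(i), our name for the remaining half
    return nl, nr
```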

Appendix 3: Strongly balanced vector generation

We consider a recursive process to generate diverse vectors that are not only composed of approximately half 0’s and half 1’s, but that additionally are strongly balanced in the sense that every successive pair of elements consists of a single 0 and a single 1. We start with the case for p = 2 and consider just the 2 vectors that contain exactly one 0 and one 1, which are complements of each other:

$$ y(1) = (1, 0) \quad \text{and} \quad y(2) = (0, 1). $$

We could use these vectors by themselves to generate the two x vectors given by x(1) = (y(1), y(1), …) and x(2) = (y(2), y(2), …), which also are complements of each other.

Now we consider all ways of pairing these two vectors, thus obtaining all vectors of the form (y(p), y(q)) for p, q = 1, 2 (i.e., (y(1), y(1)), (y(1), y(2)), and so on). From this we obtain the 4 new vectors

$$ y(1) = (1, 0, 1, 0),\quad y(2) = (1, 0, 0, 1),\quad y(3) = (0, 1, 1, 0),\quad y(4) = (0, 1, 0, 1) $$

The complement of each of these vectors is also contained in the collection generated. (For example, y(1) and y(4) are complements, and y(2) and y(3) are complements.) Moreover, these y vectors satisfy the strongly balanced property where every successive two components of these vectors consists of one 0 and one 1.

Again, we can form the vectors x(h) = (y(h), y(h), …) for h = 1 to 4, and the complement of each vector is likewise in the collection. (This holds even if the last y(h) vector in each x(h) must be truncated so that x(h) has dimension n.) Similarly, every two successive components of each vector consist of one 0 and one 1, although if n is odd there will not be a final "second component" to pair with xn(h).
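The tiling-and-truncation construction is simple to state in code; a small sketch (the helper name tile is ours):

```python
def tile(y, n):
    """Form x(h) = (y(h), y(h), ...), truncating the final copy of y(h)
    so that x(h) has dimension n."""
    reps = -(-n // len(y))           # ceiling division: enough copies to cover n
    return (list(y) * reps)[:n]

# e.g. tile((1, 0, 0, 1), 10) -> [1, 0, 0, 1, 1, 0, 0, 1, 1, 0]
```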

To take this process one step further, we combine the vectors y(1) through y(4) to produce all possible pairs (y(p), y(q)) for p, q = 1, 2, 3, 4. The 4 × 4 = 16 resulting combinations are shown below.

h  | y(p)    | y(q)
---|---------|--------
1  | 1 0 1 0 | 1 0 1 0
2  | 1 0 1 0 | 0 1 0 1
3  | 1 0 1 0 | 0 1 1 0
4  | 1 0 1 0 | 1 0 0 1
5  | 0 1 0 1 | 1 0 1 0
6  | 0 1 0 1 | 0 1 0 1
7  | 0 1 0 1 | 0 1 1 0
8  | 0 1 0 1 | 1 0 0 1
9  | 0 1 1 0 | 1 0 1 0
10 | 0 1 1 0 | 0 1 0 1
11 | 0 1 1 0 | 0 1 1 0
12 | 0 1 1 0 | 1 0 0 1
13 | 1 0 0 1 | 1 0 1 0
14 | 1 0 0 1 | 0 1 0 1
15 | 1 0 0 1 | 0 1 1 0
16 | 1 0 0 1 | 1 0 0 1

As before, the complement of every vector is also contained in the collection, and every two successive elements consist of one 0 and one 1. By stringing these vectors together to produce vectors x(h) = (y(h), y(h), …), the resulting x(h) vectors will include the vectors produced for the previous level, when h ranged from 1 to 4.

If it is desired to go further, we may form the pairs (y(p), y(q)) from this collection to produce 16 × 16 = 256 new y(h) vectors, each containing 8 + 8 = 16 components. These strongly balanced vectors do not possess some of the key features of vectors generated by the other methods described in this paper, and hence produce collections that are less diversified. Nevertheless, we anticipate that their novel structure may prove useful in certain types of applications.
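The recursive pairing process is easy to express in code. The sketch below generates the y vectors level by level; the ordering within each level may differ from the table above, but the collections coincide.

```python
from itertools import product

def strongly_balanced(levels):
    """Generate strongly balanced vectors by recursive pairing: level 0 is
    (1, 0) and (0, 1); each further level forms all concatenations
    (y(p), y(q)) of the previous level's vectors."""
    ys = [(1, 0), (0, 1)]
    for _ in range(levels):
        ys = [p + q for p, q in product(ys, repeat=2)]
    return ys

# strongly_balanced(1) yields the 4 four-component vectors y(1)-y(4);
# strongly_balanced(2) yields the 16 eight-component vectors of the table.
```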

Cite this article

Glover, F., Kochenberger, G., Xie, W. et al. Diversification methods for zero-one optimization. J Heuristics 25, 643–671 (2019). https://doi.org/10.1007/s10732-018-9399-4
