Static and Self-Adjusting Mutation Strengths for Multi-valued Decision Variables

Abstract

The most common representation in evolutionary computation is the bit string. Since very little theoretical work exists on how to use evolutionary algorithms for decision variables taking more than two values, we study the run time of simple evolutionary algorithms on some OneMax-like functions defined over \(\varOmega = \{0, 1, \ldots , r-1\}^n\). We observe a crucial difference in how the one-bit-flip and standard bit mutation operators can be extended to the multi-valued domain. While it is natural to modify a random position of the string, or to select each position of the solution vector for modification independently with probability 1/n, there are various ways to then change such a position. If we change each selected position to a random value different from the original one, we obtain an expected run time of \(\varTheta (nr \log n)\). If we change each selected position by \(+1\) or \(-1\) (chosen at random), the optimization time reduces to \(\varTheta (nr + n\log n)\). If we use a random mutation strength \(i \in \{0,1,\ldots ,r-1\}\) with probability inversely proportional to i and change the selected position by \(+i\) or \(-i\) (chosen at random), then the optimization time becomes \(\varTheta (n \log (r)(\log n +\log r))\), which is asymptotically faster than the previous bound if \(r = \omega (\log (n) \log \log (n))\). Interestingly, an even better expected performance can be achieved with a self-adjusting mutation strength that is based on the success of previous iterations. For the mutation operator that modifies a randomly chosen position, we show that the self-adjusting mutation strength yields an expected optimization time of \(\varTheta (n (\log n + \log r))\), which is best possible among all dynamic mutation strengths. In our proofs, we use a new multiplicative drift theorem for computing lower bounds, which is not restricted to processes that move only towards the target.
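To make the three static mutation operators concrete, the following is a minimal Python sketch of how a single selected position could be changed; it is our illustration, not code from the paper. Mutated values are clamped to \(\{0, 1, \ldots , r-1\}\) (the paper's boundary handling may differ), and since a probability inversely proportional to \(i\) is undefined for \(i = 0\), the harmonic operator below draws \(i\) from \(\{1, \ldots , r-1\}\).

```python
import random

def mutate_uniform(value: int, r: int) -> int:
    """Replace the value with a uniformly random different one (Theta(nr log n) regime)."""
    new = random.randrange(r - 1)
    return new if new < value else new + 1  # skip the original value

def mutate_unit_step(value: int, r: int) -> int:
    """Move by +1 or -1, chosen at random (Theta(nr + n log n) regime)."""
    return min(max(value + random.choice((-1, 1)), 0), r - 1)

def mutate_harmonic(value: int, r: int) -> int:
    """Move by +i or -i with i drawn with probability proportional to 1/i
    (Theta(n log(r)(log n + log r)) regime)."""
    strengths = range(1, r)
    weights = [1.0 / i for i in strengths]
    i = random.choices(strengths, weights=weights)[0]
    return min(max(value + random.choice((-1, 1)) * i, 0), r - 1)
```

In an RLS-style algorithm one such call is applied to a single randomly chosen position per iteration; in a (1+1) EA-style algorithm each position is mutated independently with probability \(1/n\), as described in the abstract.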


Notes

  1. Following the terminology introduced in [24] and extended in [10, Section 3.1], we distinguish between functionally-dependent and self-adjusting adaptive parameter choices. Functionally-dependent parameter choices depend only on the current state of the algorithm, but they may explicitly use absolute fitness values; fitness- and rank-dependent mutation rates are typical examples of such choices. Self-adjusting parameter choices, in contrast, do not depend on absolute fitness information but rather on the success of previous iterations. This is the case for the parameter updates of the \(\mathtt{RLS} _{a,b}\) considered in this work. A typical representative of this class is the so-called one-fifth rule, which is often used in evolution strategies to control the step size of the algorithm under consideration. Other dynamic update rules are called either deterministic (if the parameters depend on nothing but the iteration count, neither on the success nor on the state of the optimization process) or self-adapting. Self-adaptive algorithms encode the parameters themselves into the genome of the individuals and aim to evolve good parameters during the optimization process.
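To make this concrete, here is a minimal Python sketch of such a success-based self-adjusting mutation strength, in the spirit of the \(\mathtt{RLS} _{a,b}\) update just described: the strength is multiplied by some \(a > 1\) after a successful iteration and by some \(b < 1\) after an unsuccessful one. The fitness function, parameter values, strength cap, and acceptance rule are illustrative assumptions of ours, not the paper's exact settings.

```python
import random

def fitness(x, target):
    # A OneMax-like function over {0, ..., r-1}^n: negated distance to a target.
    return -sum(abs(xi - ti) for xi, ti in zip(x, target))

def rls_ab(n, r, a=2.0, b=0.5, max_iters=1_000_000):
    target = [random.randrange(r) for _ in range(n)]
    x = [random.randrange(r) for _ in range(n)]
    fx = fitness(x, target)
    v = 1.0  # current mutation strength (velocity)
    for t in range(max_iters):
        if fx == 0:  # optimum found
            return t
        y = list(x)
        pos = random.randrange(n)                     # one random position
        step = random.choice((-1, 1)) * max(1, round(v))
        y[pos] = min(max(y[pos] + step, 0), r - 1)    # clamp to {0, ..., r-1}
        fy = fitness(y, target)
        if fy > fx:                  # success: accept, increase the strength
            x, fx = y, fy
            v = min(v * a, (r - 1) / 2)
        else:                        # failure: reject, decrease the strength
            v = max(v * b, 1.0)
    return max_iters
```

For example, rls_ab(100, 64) returns the number of iterations until the randomly sampled target string is hit (or max_iters on timeout).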

References

  1. Auger, A., Doerr, B.: Theory of Randomized Search Heuristics. World Scientific, Singapore (2011)

  2. Auger, A., Hansen, N.: Linear convergence on positively homogeneous functions of a comparison based step-size adaptive randomized search: the (1+1) ES with generalized one-fifth success rule. CoRR (2013). arXiv:1310.8397

  3. Badkobeh, G., Lehre, P.K., Sudholt, D.: Unbiased black-box complexity of parallel search. In: Proceedings of Parallel Problem Solving from Nature (PPSN’14), Lecture Notes in Computer Science, vol. 8672, pp. 892–901. Springer (2014)

  4. Böttcher, S., Doerr, B., Neumann, F.: Optimal fixed and adaptive mutation rates for the LeadingOnes problem. In: Proceedings of Parallel Problem Solving from Nature (PPSN’10), Lecture Notes in Computer Science, vol. 6238, pp. 1–10. Springer (2010)

  5. Buzdalov, M., Doerr, B.: Runtime analysis of the \((1+(\lambda ,\lambda ))\) genetic algorithm on random satisfiable 3-CNF formulas. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’17). ACM (2017)

  6. Dang, D., Lehre, P.K.: Self-adaptation of mutation rates in non-elitist populations. In: Proceedings of Parallel Problem Solving from Nature (PPSN’16), Lecture Notes in Computer Science, vol. 9921, pp. 803–813. Springer (2016)

  7. Dietzfelbinger, M., Rowe, J.E., Wegener, I., Woelfel, P.: Tight bounds for blind search on the integers and the reals. Comb. Probab. Comput. 19, 711–728 (2010)

  8. Doerr, B.: Analyzing randomized search heuristics: tools from probability theory. In: Auger, A., Doerr, B. (eds.) Theory of Randomized Search Heuristics, pp. 1–20. World Scientific Publishing, Singapore (2011)

  9. Doerr, B., Doerr, C.: The impact of random initialization on the runtime of randomized search heuristics. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’14), pp. 1375–1382. ACM (2014)

  10. Doerr, B., Doerr, C.: Optimal parameter choices through self-adjustment: applying the 1/5-th rule in discrete settings. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’15), pp. 1335–1342. ACM (2015)

  11. Doerr, B., Doerr, C., Ebel, F.: From black-box complexity to designing new genetic algorithms. Theor. Comput. Sci. 567, 87–104 (2015)

  12. Doerr, B., Doerr, C., Kötzing, T.: Provably optimal self-adjusting step sizes for multi-valued decision variables. In: Proceedings of Parallel Problem Solving from Nature (PPSN’16), Lecture Notes in Computer Science, vol. 9921, pp. 782–791. Springer (2016)

  13. Doerr, B., Doerr, C., Kötzing, T.: The right mutation strength for multi-valued decision variables. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’16), pp. 1115–1122. ACM (2016)

  14. Doerr, B., Doerr, C., Yang, J.: \(k\)-bit mutation with self-adjusting \(k\) outperforms standard bit mutation. In: Proceedings of Parallel Problem Solving from Nature (PPSN’16), Lecture Notes in Computer Science, vol. 9921, pp. 824–834. Springer (2016)

  15. Doerr, B., Doerr, C., Yang, J.: Optimal parameter choices via precise black-box analysis. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’16), pp. 1123–1130. ACM (2016)

  16. Doerr, B., Fouz, M., Witt, C.: Sharp bounds by probability-generating functions and variable drift. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’11), pp. 2083–2090. ACM (2011)

  17. Doerr, B., Gießen, C., Witt, C., Yang, J.: The (1+\(\lambda \)) evolutionary algorithm with self-adjusting mutation rate. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’17). ACM (2017)

  18. Doerr, B., Goldberg, L.A.: Adaptive drift analysis. Algorithmica 65, 224–250 (2013)

  19. Doerr, B., Johannsen, D.: Adjacency list matchings: an ideal genotype for cycle covers. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’07), pp. 1203–1210. ACM (2007)

  20. Doerr, B., Johannsen, D., Schmidt, M.: Runtime analysis of the (1+1) evolutionary algorithm on strings over finite alphabets. In: Proceedings of Foundations of Genetic Algorithms (FOGA’11), pp. 119–126. ACM (2011)

  21. Doerr, B., Johannsen, D., Winzen, C.: Multiplicative drift analysis. Algorithmica 64, 673–697 (2012)

  22. Doerr, B., Pohl, S.: Run-time analysis of the (1+1) evolutionary algorithm optimizing linear functions over a finite alphabet. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’12), pp. 1317–1324. ACM (2012)

  23. Droste, S., Jansen, T., Wegener, I.: Upper and lower bounds for randomized search heuristics in black-box optimization. Theory Comput. Syst. 39, 525–544 (2006)

  24. Eiben, A.E., Hinterding, R., Michalewicz, Z.: Parameter control in evolutionary algorithms. IEEE Trans. Evolut. Comput. 3, 124–141 (1999)

  25. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing. Springer, Berlin (2003)

  26. Gunia, C.: On the analysis of the approximation capability of simple evolutionary algorithms for scheduling problems. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’05), pp. 571–578. ACM (2005)

  27. Hansen, N., Gawelczyk, A., Ostermeier, A.: Sizing the population with respect to the local progress in (1,\( \lambda \))-evolution strategies—a theoretical analysis. In: Proceedings of IEEE Congress on Evolutionary Computation (CEC’95), pp. 80–85. IEEE (1995)

  28. He, J., Yao, X.: Drift analysis and average time complexity of evolutionary algorithms. Artif. Intell. 127, 57–85 (2001)

  29. Jägersküpper, J.: Rigorous runtime analysis of the (1+1) ES: 1/5-rule and ellipsoidal fitness landscapes. In: Proceedings of Foundations of Genetic Algorithms (FOGA’05), Lecture Notes in Computer Science, vol. 3469, pp. 260–281. Springer (2005)

  30. Jägersküpper, J.: Oblivious randomized direct search for real-parameter optimization. In: Proceedings of European Symposium on Algorithms (ESA’08), Lecture Notes in Computer Science, vol. 5193, pp. 553–564. Springer (2008)

  31. Jansen, T.: Analyzing Evolutionary Algorithms—The Computer Science Perspective. Springer, Berlin (2013)

  32. Jansen, T., Wegener, I.: On the analysis of a dynamic evolutionary algorithm. J. Discrete Algorithms 4, 181–199 (2006)

  33. Johannsen, D.: Random combinatorial structures and randomized search heuristics. Ph.D. thesis, Saarland University. http://scidok.sulb.uni-saarland.de/volltexte/2011/3529/ (2010)

  34. Karafotias, G., Hoogendoorn, M., Eiben, A.: Parameter control in evolutionary algorithms: trends and challenges. IEEE Trans. Evolut. Comput. 19, 167–187 (2015)

  35. Kötzing, T., Lissovoi, A., Witt, C.: (1+1) EA on generalized dynamic OneMax. In: Proceedings of Foundations of Genetic Algorithms (FOGA’15), pp. 40–51. ACM (2015)

  36. Lässig, J., Sudholt, D.: Adaptive population models for offspring populations and parallel evolutionary algorithms. In: Proceedings of Foundations of Genetic Algorithms (FOGA’11), pp. 181–192. ACM (2011)

  37. Lehre, P.K., Witt, C.: Black-box search by unbiased variation. Algorithmica 64, 623–642 (2012)

  38. Lissovoi, A., Witt, C.: MMAS vs. population-based EA on a family of dynamic fitness functions. In: Proceedings of Genetic and Evolutionary Computation Conference (GECCO’14), pp. 1399–1406. ACM (2014)

  39. Mitavskiy, B., Rowe, J., Cannings, C.: Theoretical analysis of local search strategies to optimize network communication subject to preserving the total number of links. Int. J. Intell. Comput. Cybern. 2, 243–284 (2009)

  40. Neumann, F., Witt, C.: Bioinspired Computation in Combinatorial Optimization—Algorithms and Their Computational Complexity. Springer, Berlin (2010)

  41. Oliveto, P.S., Lehre, P.K., Neumann, F.: Theoretical analysis of rank-based mutation: combining exploration and exploitation. In: Proceedings of Congress on Evolutionary Computation (CEC’09), pp. 1455–1462. IEEE (2009)

  42. Rothlauf, F.: Representations for Genetic and Evolutionary Algorithms, 2nd edn. Springer, Berlin (2006)

  43. Rudolph, G.: An evolutionary algorithm for integer programming. In: Proceedings of Parallel Problem Solving from Nature (PPSN’94), pp. 139–148. Springer (1994)

  44. Scharnow, J., Tinnefeld, K., Wegener, I.: The analysis of evolutionary algorithms on sorting and shortest paths problems. J. Math. Model. Algorithms 3, 349–366 (2004)

  45. Witt, C.: Tight bounds on the optimization time of a randomized search heuristic on linear functions. Comb. Probab. Comput. 22, 294–318 (2013)

  46. Zarges, C.: Rigorous runtime analysis of inversely fitness proportional mutation rates. In: Proceedings of Parallel Problem Solving from Nature (PPSN’08), Lecture Notes in Computer Science, vol. 5199, pp. 112–122. Springer (2008)

  47. Zarges, C.: On the utility of the population size for inversely fitness proportional mutation rates. In: Proceedings of Foundations of Genetic Algorithms (FOGA’09), pp. 39–46. ACM (2009)

Acknowledgements

This work was supported by a public grant as part of the Investissement d’avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, in a joint call with Programme Gaspard Monge en Optimisation et Recherche Opérationnelle.

Author information

Corresponding author

Correspondence to Timo Kötzing.

Additional information

Results presented in this work are based on [12, 13].

About this article

Cite this article

Doerr, B., Doerr, C. & Kötzing, T. Static and Self-Adjusting Mutation Strengths for Multi-valued Decision Variables. Algorithmica 80, 1732–1768 (2018). https://doi.org/10.1007/s00453-017-0341-1
