Abstract
A theoretical study of the average convergence rate was previously conducted for discrete optimisation. This paper extends that analysis to continuous optimisation. First, the strategies for generating new solutions are classified into two categories: landscape-invariant and landscape-adaptive. It is then proven that the average convergence rate of evolutionary algorithms using positive-adaptive generators is asymptotically positive, whereas that of algorithms using landscape-invariant or zero-adaptive generators asymptotically converges to zero. A case study is presented to validate the applicability of the theoretical results. Besides the theoretical study, numerical simulations demonstrate the feasibility of the average convergence rate in practical applications. For the case of an unknown optimum, an alternative definition of the average convergence rate is also considered.
The first author was supported by the National Science Foundation of China (NSFC) under Grant No. 61303028; the second author was supported by EPSRC under Grant No. EP/I009809/1.
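The abstract's key quantity can be illustrated with a short numerical sketch. Assuming the average convergence rate is defined as in [15] by \(R_t = 1 - (e_t/e_0)^{1/t}\), where \(e_t\) is the approximation error at generation \(t\), the following Python snippet estimates it for a toy (1+1)-type search on the sphere function with a success-based step-size rule. The helper name, the test problem, and the constants are illustrative choices of ours rather than material from the paper, and the adaptive rule merely stands in for what the paper calls a landscape-adaptive generator.

```python
import numpy as np

def average_convergence_rate(errors):
    """Empirical average convergence rate over t generations.

    Assumes the definition R_t = 1 - (e_t / e_0)^(1/t) from [15],
    where errors[k] is the approximation error e_k at generation k.
    """
    e = np.asarray(errors, dtype=float)
    t = len(e) - 1
    if t < 1 or e[0] == 0.0:
        return 0.0
    return 1.0 - (e[t] / e[0]) ** (1.0 / t)

# Toy (1+1)-type search on the 10-dimensional sphere function with a
# success-based step-size rule; this only stands in for a landscape-adaptive
# generator, and the factors 1.22 / 0.82 are illustrative choices.
rng = np.random.default_rng(1)
x = rng.normal(size=10)
sigma = 1.0
errors = [float(np.sum(x ** 2))]
for _ in range(500):
    y = x + sigma * rng.normal(size=x.size)
    if np.sum(y ** 2) <= np.sum(x ** 2):
        x, sigma = y, sigma * 1.22   # success: enlarge the step size
    else:
        sigma *= 0.82                # failure: shrink the step size
    errors.append(float(np.sum(x ** 2)))

print(average_convergence_rate(errors))  # positive in this toy run
```

Fixing \(\sigma\) to a constant would give a landscape-invariant generator in the paper's terminology; by the result summarised in the abstract, the measured rate would then be expected to drift towards zero as the run length grows.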
Notes
- 1.
We do not add a convergence order because, for discrete optimisation, \(e_t\) converges linearly to 0 according to [15, Theorem 1]. For continuous optimisation, a conjecture is that \(e_t\) also converges linearly unless gradient information is used in the search (the notion of linear convergence is spelled out below).
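The notion of linear convergence used in this footnote is the standard textbook one (our formulation, not a quotation from [15, Theorem 1]):

```latex
% Linear (geometric) convergence of the error sequence (e_t):
% there exists a constant c in (0,1) such that, for all sufficiently large t,
\[
  e_{t+1} \le c\, e_t
  \qquad\Longrightarrow\qquad
  e_t = O\!\left(c^{\,t}\right),
\]
% i.e. the error decreases at least geometrically fast in t.
```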
References
Agapie, A., Agapie, M., Rudolph, G., Zbaganu, G.: Convergence of evolutionary algorithms on the \(n\)-dimensional continuous space. IEEE Trans. Cybern. 43(5), 1462–1472 (2013)
Akimoto, Y., Auger, A., Hansen, N.: Quality gain analysis of the weighted recombination evolution strategy on general convex quadratic functions. In: Proceedings of the 14th ACM/SIGEVO Conference on Foundations of Genetic Algorithms, pp. 111–126. ACM (2017)
Auger, A.: Convergence results for the (1, \(\lambda \))-SA-ES using the theory of \(\phi \)-irreducible Markov chains. Theoret. Comput. Sci. 334(1–3), 35–69 (2005)
Auger, A., Hansen, N.: Reconsidering the progress rate theory for evolution strategies in finite dimensions. In: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pp. 445–452. ACM (2006)
Auger, A., Hansen, N.: Theory of evolution strategies: a new perspective. In: Theory of Randomized Search Heuristics: Foundations and Recent Developments, pp. 289–325. World Scientific (2011)
Auger, A., Hansen, N.: Linear convergence of comparison-based step-size adaptive randomized search via stability of Markov chains. SIAM J. Optim. 26(3), 1589–1624 (2016)
Beyer, H.G.: The Theory of Evolution Strategies. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-662-04378-3
Beyer, H.G., Hellwig, M.: The dynamics of cumulative step size adaptation on the ellipsoid model. Evol. Comput. 24(1), 25–57 (2016)
Beyer, H.G., Melkozerov, A.: The dynamics of self-adaptive multirecombinant evolution strategies on the general ellipsoid model. IEEE Trans. Evol. Comput. 18(5), 764–778 (2014)
Chen, Y., Zou, X., He, J.: Drift conditions for estimating the first hitting times of evolutionary algorithm. Int. J. Comput. Math. 88(1), 37–50 (2011)
Ding, L., Kang, L.: Convergence rates for a class of evolutionary algorithms with elitist strategy. Acta Mathematica Scientia 21(4), 531–540 (2001)
Doob, J.L.: Stochastic Processes. Wiley, New York (1953)
Droste, S., Jansen, T., Wegener, I.: On the analysis of the (1+1) evolutionary algorithm. Theoret. Comput. Sci. 276(1–2), 51–81 (2002)
He, J., Kang, L.: On the convergence rate of genetic algorithms. Theoret. Comput. Sci. 229(1–2), 23–39 (1999)
He, J., Lin, G.: Average convergence rate of evolutionary algorithms. IEEE Trans. Evol. Comput. 20(2), 316–321 (2016)
He, J., Yao, X.: Drift analysis and average time complexity of evolutionary algorithms. Artif. Intell. 127(1), 57–85 (2001)
He, J., Yu, X.: Conditions for the convergence of evolutionary algorithms. J. Syst. Architect. 47(7), 601–612 (2001)
He, J., Zhou, Y., Lin, G.: An initial error analysis for evolutionary algorithms. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 317–318. ACM (2017)
Huang, H., Xu, W., Zhang, Y., Lin, Z., Hao, Z.: Runtime analysis for continuous (1+1) evolutionary algorithm based on average gain model. Scientia Sinica Informationis 44(6), 811–824 (2014)
Jebalia, M., Auger, A., Hansen, N.: Log-linear convergence and divergence of the scale-invariant (1+1)-ES in noisy environments. Algorithmica 59(3), 425–460 (2011)
Meyn, S., Tweedie, R.: Markov Chains and Stochastic Stability. Springer, London (1993). https://doi.org/10.1007/978-1-4471-3267-7
Rudolph, G.: Local convergence rates of simple evolutionary algorithms with Cauchy mutations. IEEE Trans. Evol. Comput. 1(4), 249–258 (1997)
Rudolph, G.: Convergence rates of evolutionary algorithms for a class of convex objective functions. Control Cybern. 26, 375–390 (1997)
Varga, R.: Matrix Iterative Analysis. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-05156-2
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chen, Y., He, J. (2020). Average Convergence Rate of Evolutionary Algorithms II: Continuous Optimisation. In: Li, K., Li, W., Wang, H., Liu, Y. (eds) Artificial Intelligence Algorithms and Applications. ISICA 2019. Communications in Computer and Information Science, vol 1205. Springer, Singapore. https://doi.org/10.1007/978-981-15-5577-0_3
DOI: https://doi.org/10.1007/978-981-15-5577-0_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-5576-3
Online ISBN: 978-981-15-5577-0
eBook Packages: Computer Science (R0)