Abstract
Many Estimation-of-Distribution Algorithms (EDAs) use maximum-likelihood (ML) estimates. For discrete variables this has met with great success. For continuous variables, however, ML estimates of the normal distribution do not directly lead to successful optimization in most landscapes. It was previously found that an important reason for this is the premature shrinking of the variance at an exponential rate, and remedies were subsequently formulated, namely Adaptive Variance Scaling (AVS) and Standard-Deviation Ratio triggering (SDR). Here we focus on a second source of inefficiency that is not removed by these existing remedies. We then provide a simple but effective technique, called Anticipated Mean Shift (AMS), that removes this inefficiency.
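The abstract only sketches AMS, so the following is a minimal, hypothetical illustration of the idea rather than the paper's exact algorithm: a univariate-factorized Gaussian EDA with truncation selection in which, after sampling from the ML model, a fraction of the new solutions is shifted along the direction the mean moved in the previous generation. The function name `ams_eda` and the parameter values (shifted fraction 0.5, shift multiplier 2.0) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ams_eda(f, dim, pop_size=100, tau=0.35, ams_frac=0.5, ams_delta=2.0,
            generations=200, seed=0):
    """Sketch of a Gaussian EDA with Anticipated Mean Shift (minimisation)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    prev_mean = None
    for _ in range(generations):
        # Truncation selection: keep the best tau * pop_size solutions.
        order = np.argsort([f(x) for x in pop])
        sel = pop[order[: int(tau * pop_size)]]
        # Maximum-likelihood estimates of a (diagonal) normal distribution.
        mean = sel.mean(axis=0)
        std = sel.std(axis=0) + 1e-12          # ML estimate (ddof=0)
        # Sample the next population from the ML model.
        pop = rng.normal(mean, std, size=(pop_size, dim))
        # Anticipated Mean Shift: move a fraction of the new samples
        # further along the mean's movement from the previous generation.
        if prev_mean is not None:
            pop[: int(ams_frac * pop_size)] += ams_delta * (mean - prev_mean)
        prev_mean = mean
    return min(pop, key=f)

# Usage: minimise the 10-dimensional sphere function.
best = ams_eda(lambda x: float(np.sum(x ** 2)), dim=10)
```

Shifting only part of the newly sampled population means selection can still reject the anticipated direction when it does not pay off; this sketch omits the paper's specific parameter settings and its full-covariance variants.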
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Bosman, P.A.N., Grahl, J., Thierens, D. (2008). Enhancing the Performance of Maximum–Likelihood Gaussian EDAs Using Anticipated Mean Shift. In: Rudolph, G., Jansen, T., Beume, N., Lucas, S., Poloni, C. (eds) Parallel Problem Solving from Nature – PPSN X. PPSN 2008. Lecture Notes in Computer Science, vol 5199. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87700-4_14
DOI: https://doi.org/10.1007/978-3-540-87700-4_14
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-87699-1
Online ISBN: 978-3-540-87700-4
eBook Packages: Computer Science (R0)