Abstract
The Learnable Evolution Model (LEM), which alternates periods of optimization and learning, performs extremely well on a range of problems and specialises in achieving good results in relatively few function evaluations. LEM implementations tend to use sophisticated learning strategies. Here we continue an exploration of alternative and simpler learning strategies, and try Entropy-based Discretization (ED), whereby, for each parameter in the search space, we infer from recently evaluated samples what seems to be a ‘good’ interval. We find that LEM(ED) provides significant advantages in both solution speed and quality over the unadorned evolutionary algorithm, and is usually superior to CMA-ES when the number of evaluations is limited. It is interesting to see such improvement gained from an easily implemented approach. LEM(ED) can be tentatively recommended for trial on problems where good results are needed in relatively few fitness evaluations, and it remains open to several routes of extension and further sophistication. Finally, although the results reported here are not based on a modern function-optimization suite, ongoing work confirms that our findings remain valid for non-separable functions.
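The core ED step described above — inferring, per parameter, a 'good' interval from recently evaluated samples — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the labeling scheme (top and bottom fractions of the population by fitness) and all function names are assumptions made for the sketch, and the single entropy-minimizing cut per parameter is one simple way to realise the idea.

```python
import math
import random

def entropy(labels):
    """Shannon entropy of a binary (0/1) label list."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def best_cut(values, labels):
    """Cut point on one parameter minimizing weighted class entropy."""
    pairs = sorted(zip(values, labels))
    best, best_score = None, float("inf")
    for i in range(1, len(pairs)):
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        score = (len(left) * entropy(left)
                 + len(right) * entropy(right)) / len(pairs)
        if score < best_score:
            best_score, best = score, (pairs[i - 1][0] + pairs[i][0]) / 2.0
    return best

def good_intervals(population, fitnesses, bounds, frac=0.3):
    """Infer a 'good' interval per parameter: label the top fraction of
    the evaluated population 1 and the bottom fraction 0 (maximization
    assumed), cut each parameter where class entropy is minimized, and
    keep the side of the cut where the good samples concentrate."""
    order = sorted(range(len(population)), key=lambda i: fitnesses[i])
    k = max(1, int(frac * len(population)))
    low, high = order[:k], order[-k:]
    idx = low + high
    labels = [0] * k + [1] * k
    intervals = []
    for d, (lo, hi) in enumerate(bounds):
        cut = best_cut([population[i][d] for i in idx], labels)
        good_vals = [population[i][d] for i in high]
        if sum(v >= cut for v in good_vals) >= k / 2:
            intervals.append((cut, hi))
        else:
            intervals.append((lo, cut))
    return intervals
```

In a LEM-style loop, new candidates would then be sampled (e.g. uniformly) within these inferred intervals before the algorithm reverts to its evolutionary phase.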
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Sheri, G., Corne, D.W. (2009). Evolutionary Optimization Guided by Entropy-Based Discretization. In: Giacobini, M., et al. Applications of Evolutionary Computing. EvoWorkshops 2009. Lecture Notes in Computer Science, vol 5484. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01129-0_79
DOI: https://doi.org/10.1007/978-3-642-01129-0_79
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-01128-3
Online ISBN: 978-3-642-01129-0