Abstract
The focus is on black-box optimization, i.e., optimizing a function f that is given only as an oracle for f-evaluations. This setting is commonly called direct search, and in fact most direct-search methods are heuristics. Theoretical results on the performance and behavior of such heuristics are still rare. One reason: like classical optimization algorithms, direct-search methods face the challenge of step-size control, and usually, the more sophisticated the step-size control, the harder the analysis. Obviously, when the search is to actually converge to a stationary point (i.e., the distance from this point tends to zero) at a nearly constant rate, then step sizes must be adapted. In practice, however, obtaining an ε-approximation for a given ε > 0 is often sufficient, and usually all N parameters are bounded, so that the maximum distance from the optimum is bounded. Thus, in such cases reasonable step sizes lie in a predetermined bounded interval. Considering the minimization of the distance from a fixed point as the objective, we address the following question for randomized heuristics that use isotropic sampling to generate new candidate solutions: can we get rid of step-size control, and of the problems connected to it, such as so-called premature convergence, by choosing step sizes randomly according to some properly predefined distribution over this interval? As this choice of step sizes is oblivious to the course of the optimization, we gain robustness against a loss of step-size control. Naturally, the question is: what is the price w.r.t. local convergence speed? As we shall see, it is merely a factor of order ln(d/ε), where d is the diameter of the decision space, an N-dimensional interval region.
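The scheme described in the abstract can be sketched as follows: a minimal (1+1)-style elitist direct search that samples an isotropic direction and, instead of adapting the step size, draws each step length obliviously from a fixed distribution over a bounded interval. This is an illustrative sketch, not the paper's exact algorithm; the log-uniform step distribution over [eps, d] and all parameter names are assumptions chosen for illustration.

```python
import math
import random

def oblivious_direct_search(f, x0, eps, d, budget):
    """(1+1)-style elitist direct search with isotropic sampling.

    Step lengths are drawn obliviously (independently of the search
    history) from a log-uniform distribution over [eps, d] rather than
    being adapted. The distribution choice is an assumption made for
    illustration; it is not prescribed by the abstract.
    """
    n = len(x0)
    x = list(x0)
    fx = f(x)
    for _ in range(budget):
        # Isotropic direction: normalize a standard Gaussian vector,
        # which is uniformly distributed on the unit sphere.
        g = [random.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(gi * gi for gi in g))
        # Oblivious step length: log-uniform over [eps, d].
        step = math.exp(random.uniform(math.log(eps), math.log(d)))
        y = [xi + step * gi / norm for xi, gi in zip(x, g)]
        fy = f(y)
        if fy <= fx:  # elitist selection: never accept a worse point
            x, fx = y, fy
    return x, fx
```

For the objective considered in the abstract (distance from a fixed point), f can be taken as the squared Euclidean distance from the target; since the step distribution never shrinks adaptively, the search cannot suffer premature convergence, at the cost of many wasted samples whose length is far from the current distance.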
Keywords
- Isotropic Distribution
- Covariance Matrix Adaptation Evolution Strategy
- Isotropic Sampling
- Elitist Selection
- Randomized Search Heuristic
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Jägersküpper, J. (2008). Oblivious Randomized Direct Search for Real-Parameter Optimization. In: Halperin, D., Mehlhorn, K. (eds) Algorithms - ESA 2008. ESA 2008. Lecture Notes in Computer Science, vol 5193. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87744-8_46
Print ISBN: 978-3-540-87743-1
Online ISBN: 978-3-540-87744-8