Abstract
This paper introduces a modification of our original Delaunay-based optimization algorithm (developed in JOGO, DOI:10.1007/s10898-015-0384-2) that reduces the number of function evaluations on the boundary of feasibility as compared with the original algorithm. A weakness we have identified in the original algorithm is the sometimes faulty behavior of the generated uncertainty function near the boundary of feasibility, which leads to more function evaluations along that boundary than might otherwise be necessary. To address this issue, a second search function is introduced which has improved behavior near the boundary of the search domain. Additionally, the datapoints are quantized onto a Cartesian grid over the search domain, which is successively refined. These two modifications lead to a significant reduction in the number of datapoints accumulating on the boundary of feasibility, and to faster overall convergence.
Notes
Taking \(a\) and \(b\) as vectors, \(a\le b\) implies that \(a_i\le b_i\ \forall i\).
References
Abramson, M.A., Audet, C., Dennis, J.E., Le Digabel, S.: OrthoMADS: a deterministic MADS instance with orthogonal directions. SIAM J. Optim. 20(2), 948–966 (2009)
Alimohammadi, S., He, D.: Multi-stage algorithm for uncertainty analysis of solar power forecasting. In: Power and Energy Society General Meeting (PESGM), 2016, pp. 1–5. IEEE (2016)
Alimohammadi, S., Beyhaghi, P., Bewley, T.: Delaunay-based optimization in CFD leveraging multivariate adaptive polyharmonic splines (MAPS). In: 58th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference (2017)
Audet, C., Dennis, J.E.: A pattern search filter method for nonlinear programming without derivatives. SIAM J. Optim. 14(4), 980–1010 (2004)
Audet, C., Dennis, J.E.: Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim. 17(1), 188–217 (2006)
Audet, C., Dennis, J.E.: A progressive barrier for derivative-free nonlinear programming. SIAM J. Optim. 20(1), 445–472 (2009)
Belitz, P., Bewley, T.: New horizons in sphere-packing theory, part II: lattice-based derivative-free optimization via global surrogates. J. Glob. Optim. 56(1), 61–91 (2013)
Beyhaghi, P., Cavaglieri, D., Bewley, T.: Delaunay-based derivative-free optimization via global surrogates, part I: linear constraints. J. Glob. Optim. 63, 1–52 (2015)
Beyhaghi, P., Bewley, T.: Delaunay-based Derivative-free optimization via global surrogates, part II: convex constraints. J. Glob. Optim. 2016, 1–33 (2016)
Booker, A.J., Dennis, J.E., Frank, P.D., Serafini, D.B., Torczon, V., Trosset, M.W.: A Rigorous Framework for Optimization of Expensive Functions by Surrogates. Springer-Verlag, Berlin (1999)
Galperin, E.A.: The cubic algorithm. J. Math. Anal. Appl. 112, 635–640 (1985)
Gill, P.E., Murray, W., Wright, M.H.: Practical optimization, pp. 99–104. Academic Press, London (1981)
Jones, D.R., Perttunen, C.D., Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 79(1), 157–181 (1993)
Jones, D.R.: A taxonomy of global optimization methods based on response surfaces. J. Glob. Optim. 21, 345–383 (2001)
Lewis, R.M., Torczon, V., Trosset, M.W.: Direct Search Method: Then and Now, NASA/CR-2000-210125, ICASE Report No.2000-26 (2000)
Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999)
Paulavičius, R., Žilinskas, J.: Simplicial Global Optimization. Springer, New York (2014)
Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)
Schonlau, M., Welch, W.J., Jones, D.R.: A data-analytic approach to Bayesian global optimization. Department of Statistics and Actuarial Science and The Institute for Improvement in Quality and Productivity, 1997 ASA Conference (1997)
Shubert, B.O.: A sequential method seeking the global maximum of a function. SIAM J. Numer. Anal. 9(3), 379–388 (1972)
Torczon, V.: Multi-Directional Search, A Direct Search Algorithm for Parallel Machines. Ph.D. thesis, Rice University, Houston, TX (1989)
Torczon, V.: On the convergence of pattern search algorithms. SIAM J. Optim. 7(1), 1–25 (1997)
Acknowledgements
The authors gratefully acknowledge AFOSR Grant FA 9550-12-1-0046 in support of this work.
Appendix: Modified algorithm for problems without target value
In this appendix, we present a modified algorithm that does not require a target value for the objective function. The algorithm developed is quite similar to Algorithm 2, with the continuous and discrete search functions modified as follows:
The parameters \(L^k\) and \(K^k\) are two positive series which are defined as follows:
$$\begin{aligned} L^k = L_0\, \ell ^k, \qquad K^k = K_0\, 2^{\ell ^k}, \end{aligned}$$
where \(\ell ^k\) is the level of the grid at step k. The convergence analysis of this modified algorithm is similar to the analysis presented in Sect. 5, with the main differences as follows:
1.
Equation (14) is modified to:
$$\begin{aligned} \min \left\{ s_c^k(x^*), \min _{z \in S_U^k} \left\{ s_d^k(z) \right\} \right\} \le f(x^*). \end{aligned}$$
(33)
Note that the above equation is not true for all iterations k, but it is true once
$$\begin{aligned} K^k \ge \hat{K} \quad \text {and} \quad L^k \ge \hat{L}; \end{aligned}$$
note that the series \(K^k\) and \(L^k\) increase without bound, and thus (33) is satisfied for sufficiently large k.
2.
Equation (19) is modified to:
$$\begin{aligned} \min _{z \in S^k_E} f(z)-f(x^*) \le \max \left\{ \, (L^k+ \hat{L}) \, \delta _{k}, \, (K^k+\hat{K})\, \delta _{k}^2 \right\} . \end{aligned}$$
(34)
Moreover, we have:
$$\begin{aligned} \lim _{k \rightarrow \infty }L^k \delta _k= & {} \lim _{k \rightarrow \infty } L_0 \delta _0 \frac{\ell ^k}{2^{\ell ^k}}=0,\\ \lim _{k \rightarrow \infty }K^k \delta _k^2= & {} \lim _{k \rightarrow \infty } K_0 \delta _0^2 \frac{2^{\ell ^k}}{4^{\ell ^k}}=0. \end{aligned}$$
As a result, the right-hand side of (34) converges to zero as \(k \rightarrow \infty \).
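The vanishing of these two bounds is easily checked numerically. The sketch below is a standalone illustration (not the paper's code): it assumes a grid spacing \(\delta _\ell = \delta _0/2^{\ell }\) that halves at each refinement, together with parameter series \(L^k = L_0\,\ell ^k\) and \(K^k = K_0\, 2^{\ell ^k}\) consistent with the limits above, and tabulates the two terms bounding (34) as the grid level grows.

```python
# Numerical check that L^k * delta_k -> 0 and K^k * delta_k^2 -> 0 as the
# grid level ell increases. Assumes delta_ell = delta_0 / 2^ell and the
# parameter series L^k = L_0 * ell, K^k = K_0 * 2^ell (consistent with the
# limits in the appendix); L0, K0, delta0 defaults are illustrative only.

def bounds_at_level(ell, L0=5.0, K0=50.0, delta0=1.0):
    delta = delta0 / 2**ell          # grid spacing halves at each refinement
    L = L0 * ell                     # slowly growing parameter series L^k
    K = K0 * 2**ell                  # exponentially growing series K^k
    return L * delta, K * delta**2   # the two terms bounding (34)

if __name__ == "__main__":
    for ell in (1, 5, 10, 20, 40):
        t1, t2 = bounds_at_level(ell)
        print(f"ell={ell:3d}  L*delta={t1:.3e}  K*delta^2={t2:.3e}")
```

Both terms shrink toward zero despite \(L^k\) and \(K^k\) increasing without bound, since the grid spacing decays faster than either series grows.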
We have implemented this modified algorithm on the problem of minimizing the Styblinski–Tang test problem (24), for \(n=\{2,3,4\}\), inside the domain \(-5 \le x_i \le 5\ \forall i\), with the initial point given by \(x^0_i=0.5\ \forall i\). In these computations, the values \(K_0=50\) and \(L_0=5\) were used; note that these parameter values happen to work well for this test problem. In general, selecting these two parameters well, which ultimately affects the convergence rate of the resulting algorithm, involves an exercise in trial and error; note, however, that (following the modified analysis described above) convergence is proved for this modification of Algorithm 2 for any choice of \(K_0\) and \(L_0\). An analogous issue was encountered when selecting K in Algorithm 1 of [8]. Figure 13 shows the positions of the function evaluations and support points for the \(n=2\) case, and the convergence histories for the \(n=3\) and \(n=4\) cases. The convergence of the modified algorithm proposed here is, again, seen to be quite rapid.
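For reference, the sketch below gives the standard form of the Styblinski–Tang test function (equation (24) is not reproduced in this excerpt, so the usual definition is assumed here) together with an illustrative quantization of a point onto a level-\(\ell \) Cartesian grid over \([-5,5]^n\); the grid rule shown (\(2^{\ell }\) cells per dimension) is our assumption, not the paper's exact construction.

```python
# Styblinski–Tang test function (standard form) and quantization of a point
# onto a level-ell Cartesian grid over [-5, 5]^n. The 2^ell-cells-per-
# dimension grid rule is an illustrative assumption.

def styblinski_tang(x):
    """Standard Styblinski–Tang function; global minimizer near x_i = -2.903534."""
    return 0.5 * sum(xi**4 - 16.0 * xi**2 + 5.0 * xi for xi in x)

def quantize(x, ell, lo=-5.0, hi=5.0):
    """Snap each coordinate to the nearest node of a level-ell grid on [lo, hi]."""
    h = (hi - lo) / 2**ell                          # grid spacing at level ell
    return [lo + round((xi - lo) / h) * h for xi in x]

if __name__ == "__main__":
    x0 = [0.5, 0.5]                                 # initial point used in the appendix
    print(styblinski_tang(x0))                      # objective at the initial point
    print(quantize([-2.903534, -2.903534], ell=6))  # minimizer snapped to the grid
```

Successive grid refinement corresponds to increasing `ell`, which halves the spacing `h` and allows the quantized datapoints to approach the true minimizer.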
Cite this article
Beyhaghi, P., Bewley, T. Implementation of Cartesian grids to accelerate Delaunay-based derivative-free optimization. J Glob Optim 69, 927–949 (2017). https://doi.org/10.1007/s10898-017-0548-3