Implementation of Cartesian grids to accelerate Delaunay-based derivative-free optimization

Abstract

This paper introduces a modification of our original Delaunay-based optimization algorithm (developed in JOGO, DOI:10.1007/s10898-015-0384-2) that reduces the number of function evaluations on the boundary of feasibility as compared with the original algorithm. A weakness we have identified in the original algorithm is the sometimes faulty behavior of the generated uncertainty function near the boundary of feasibility, which leads to more function evaluations along the boundary than might otherwise be necessary. To address this issue, a second search function is introduced that has improved behavior near the boundary of the search domain. Additionally, the datapoints are quantized onto a Cartesian grid over the search domain, which is successively refined. These two modifications lead to a significant reduction of datapoints accumulating on the boundary of feasibility, and to faster overall convergence.


Notes

  1. Taking a and b as vectors, \(a\le b\) implies that \(a_i\le b_i\ \forall i\).

References

  1. Abramson, M.A., Audet, C., Dennis, J.E., Le Digabel, S.: OrthoMADS: a deterministic MADS instance with orthogonal directions. SIAM J. Optim. 20(2), 948–966 (2009)

  2. Alimohammadi, S., He, D.: Multi-stage algorithm for uncertainty analysis of solar power forecasting. In: 2016 IEEE Power and Energy Society General Meeting (PESGM), pp. 1–5. IEEE (2016)

  3. Alimohammadi, S., Beyhaghi, P., Bewley, T.: Delaunay-based optimization in CFD leveraging multivariate adaptive polyharmonic splines (MAPS). In: 58th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference (2017)

  4. Audet, C., Dennis, J.E.: A pattern search filter method for nonlinear programming without derivatives. SIAM J. Optim. 14(4), 980–1010 (2004)

  5. Audet, C., Dennis, J.E.: Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim. 17(1), 188–217 (2006)

  6. Audet, C., Dennis, J.E.: A progressive barrier for derivative-free nonlinear programming. SIAM J. Optim. 20(1), 445–472 (2009)

  7. Belitz, P., Bewley, T.: New horizons in sphere-packing theory, part II: lattice-based derivative-free optimization via global surrogates. J. Glob. Optim. 56(1), 61–91 (2013)

  8. Beyhaghi, P., Cavaglieri, D., Bewley, T.: Delaunay-based derivative-free optimization via global surrogates, part I: linear constraints. J. Glob. Optim. 63, 1–52 (2015)

  9. Beyhaghi, P., Bewley, T.: Delaunay-based derivative-free optimization via global surrogates, part II: convex constraints. J. Glob. Optim. 1–33 (2016)

  10. Booker, A.J., Dennis, J.E., Frank, P.D., Serafini, D.B., Torczon, V., Trosset, M.W.: A Rigorous Framework for Optimization of Expensive Functions by Surrogates. Springer-Verlag, Berlin (1999)

  11. Galperin, E.A.: The cubic algorithm. J. Math. Anal. Appl. 112, 635–640 (1985)

  12. Gill, P.E., Murray, W., Wright, M.H.: Practical Optimization, pp. 99–104. Academic Press, London (1981)

  13. Jones, D.R., Perttunen, C.D., Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 79(1), 157–181 (1993)

  14. Jones, D.R.: A taxonomy of global optimization methods based on response surfaces. J. Glob. Optim. 21, 345–383 (2001)

  15. Lewis, R.M., Torczon, V., Trosset, M.W.: Direct Search Methods: Then and Now. NASA/CR-2000-210125, ICASE Report No. 2000-26 (2000)

  16. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999)

  17. Paulavičius, R., Žilinskas, J.: Simplicial Global Optimization. Springer, New York (2014)

  18. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)

  19. Schonlau, M., Welch, W.J., Jones, D.R.: A Data-Analytic Approach to Bayesian Global Optimization. Department of Statistics and Actuarial Science and The Institute for Improvement in Quality and Productivity, 1997 ASA Conference (1997)

  20. Shubert, B.O.: A sequential method seeking the global maximum of a function. SIAM J. Numer. Anal. 9(3), 379–388 (1972)

  21. Torczon, V.: Multi-Directional Search: A Direct Search Algorithm for Parallel Machines. Ph.D. thesis, Rice University, Houston, TX (1989)

  22. Torczon, V.: On the convergence of pattern search algorithms. SIAM J. Optim. 7(1), 1–25 (1997)


Acknowledgements

The authors gratefully acknowledge the support of this work by AFOSR Grant FA 9550-12-1-0046.

Author information

Correspondence to Pooriya Beyhaghi.

Appendix: Modified algorithm for problems without target value

In this appendix, we present a modified algorithm that does not require a target value for the objective function. The algorithm developed is quite similar to Algorithm 2, with the continuous and discrete search functions modified as follows:

$$\begin{aligned} s_c^k(x)&= p^k(x)-K^k \, e^k(x), \end{aligned}$$
(30)
$$\begin{aligned} s_d^k(x)&= p^k(x)-L^k\, \text {Dis}\{x,S^k_E\}. \end{aligned}$$
(31)

The parameters \(L^k\) and \(K^k\) are two positive sequences, defined as follows:

$$\begin{aligned} L^k=L_0 \,\ell ^k, \qquad K^k=K_0 \,\ell ^k, \end{aligned}$$
(32)

where \(\ell ^k\) is the level of the grid at step k.

Fig. 13: Application of the modified algorithm discussed in this appendix, without leveraging knowledge of the target value, to minimize the Styblinski–Tang test problem (24). The positions of the evaluated points (squares) and support points (stars) in the \(n=2\) case are depicted in a; convergence histories are shown in b for \(n=3\) and in c for \(n=4\)
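To make the modified search functions concrete, here is a minimal Python sketch of (30)–(32). It is an illustration under stated assumptions, not the paper's implementation: the surrogate \(p^k\), the uncertainty function \(e^k\), and the set of evaluated points \(S^k_E\) are passed in as the placeholder arguments `p`, `e`, and `S_E`, and their construction (the interpolant and the Delaunay-based uncertainty function of the original algorithm) is not reproduced here.

```python
import numpy as np

def schedule(level, L0=5.0, K0=50.0):
    # Tuning sequences of (32): L^k = L0 * ell^k and K^k = K0 * ell^k,
    # where `level` is the grid level ell^k at step k.
    return L0 * level, K0 * level

def s_continuous(x, p, e, K):
    # Continuous search function (30): s_c^k(x) = p^k(x) - K^k e^k(x).
    return p(x) - K * e(x)

def s_discrete(x, S_E, p, L):
    # Discrete search function (31): s_d^k(x) = p^k(x) - L^k Dis{x, S_E},
    # where Dis{x, S_E} is the distance from x to the nearest evaluated point.
    x = np.asarray(x, dtype=float)
    dist = min(np.linalg.norm(x - np.asarray(z, dtype=float)) for z in S_E)
    return p(x) - L * dist
```

Because \(K^k\) and \(L^k\) increase without bound as the grid is refined, the bound (33) below is eventually satisfied.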

The convergence analysis of this modified algorithm is similar to the analysis presented in Sect. 5, with the main differences as follows:

  1. Equation (14) is modified to:

    $$\begin{aligned} \min \left\{ s_c^k(x^*), \min _{z \in S_U^k} \left\{ s_d^k(z) \right\} \right\} \le f(x^*). \end{aligned}$$
    (33)

    Note that the above equation does not hold at all iterations k, but it does hold once

    $$\begin{aligned} K^k \ge \hat{K} \quad \text {and} \quad L^k \ge \hat{L}; \end{aligned}$$

    note that the sequences \(K^k\) and \(L^k\) increase without bound, and thus (33) is satisfied for sufficiently large k.

  2. Equation (19) is modified to

    $$\begin{aligned} \min _{z \in S^k_E} f(z)-f(x^*) \le \max \left\{ \, (L^k+ \hat{L}) \, \delta _{k}, \, (K^k+\hat{K})\, \delta _{k}^2 \right\} . \end{aligned}$$
    (34)

    Moreover, we have:

    $$\begin{aligned} \lim _{k \rightarrow \infty }L^k \delta _k&= \lim _{k \rightarrow \infty } L_0\, \delta _0\, \frac{\ell ^k}{2^{\ell ^k}}=0,\\ \lim _{k \rightarrow \infty }K^k \delta _k^2&= \lim _{k \rightarrow \infty } K_0\, \delta _0^2\, \frac{\ell ^k}{4^{\ell ^k}}=0. \end{aligned}$$

    As a result, the right-hand side of (34) converges to zero as \(k \rightarrow \infty \), as illustrated by the numerical check below.
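As a quick numerical check of these limits, the sketch below tabulates \(L^k \delta_k\) and \(K^k \delta_k^2\) over increasing grid levels; it assumes the grid spacing halves with each level, \(\delta_k = \delta_0 / 2^{\ell^k}\), which is the relation implied by the algebra above, and uses illustrative values of \(\delta_0\), \(L_0\), and \(K_0\).

```python
delta0, L0, K0 = 1.0, 5.0, 50.0  # illustrative; K0, L0 as in the text below

for level in range(1, 21):
    delta = delta0 / 2**level        # grid spacing at level ell^k
    L, K = L0 * level, K0 * level    # tuning sequences of (32)
    print(f"level {level:2d}:  L*delta = {L * delta:.3e},"
          f"  K*delta^2 = {K * delta**2:.3e}")

# Both products decay geometrically (up to the linear factor ell^k), so the
# right-hand side of (34) indeed vanishes as k -> infinity.
```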

We have applied this modified algorithm to the minimization of the Styblinski–Tang test problem (24), for \(n=\{2,3,4\}\), inside the domain \(-5 \le x_i \le 5\ \forall i\), with the initial point given by \(x^0_i=0.5\ \forall i\). In these computations, the values \(K_0=50\) and \(L_0=5\) were used; note that these parameter values happen to work well for this test problem. In general, a good selection of these two parameters, which ultimately affect the convergence rate of the resulting algorithm, involves an exercise in trial and error; note, however, that (following the modified analysis described above) convergence is proved for this modification of Algorithm 2 for any choice of \(K_0\) and \(L_0\). An analogous issue was encountered when selecting K in Algorithm 1 of [8]. Figure 13 shows the positions of the function evaluations and support points for the \(n=2\) case, and the convergence histories for the \(n=3\) and \(n=4\) cases. The convergence of the modified algorithm proposed here is, again, seen to be quite rapid.
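For reference, the following sketch gives the standard literature form of the Styblinski–Tang function, which we assume matches test problem (24) in the main text, together with the domain and initial point used above.

```python
import numpy as np

def styblinski_tang(x):
    # Standard Styblinski-Tang test function; its global minimum lies at
    # x_i ~ -2.9035 for all i, with minimum value ~ -39.166 * n.
    x = np.asarray(x, dtype=float)
    return 0.5 * np.sum(x**4 - 16.0 * x**2 + 5.0 * x)

n = 2
lower, upper = -5.0 * np.ones(n), 5.0 * np.ones(n)  # domain: -5 <= x_i <= 5
x0 = 0.5 * np.ones(n)                               # initial point: x^0_i = 0.5
print(styblinski_tang(x0))                          # -1.4375 for n = 2
```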


Cite this article

Beyhaghi, P., Bewley, T. Implementation of Cartesian grids to accelerate Delaunay-based derivative-free optimization. J Glob Optim 69, 927–949 (2017). https://doi.org/10.1007/s10898-017-0548-3
