Delaunay-based derivative-free optimization via global surrogates, part II: convex constraints


Abstract

The derivative-free global optimization algorithms developed in Part I of this study, which address linearly constrained problems, are extended here to nonconvex n-dimensional problems with convex constraints. The initial \(n+1\) feasible datapoints are chosen so as to maximize the volume of the simplex they generate; as the algorithm proceeds, additional feasible datapoints are added so that the convex hull of the available datapoints grows efficiently toward the boundary of the feasible domain. As in Part I, at each step of the algorithm a search function is defined, combining an interpolating function that passes through all available datapoints with a synthetic uncertainty function that characterizes the distance to the nearest datapoints. This uncertainty function is built on the framework of a Delaunay triangulation of all available datapoints together with the (infeasible) vertices of an exterior simplex that completely contains the feasible domain. The search function is minimized within those simplices of this Delaunay triangulation that do not include vertices of the exterior simplex. If the resulting minimizer lies within the circumsphere of a simplex that includes a vertex of the exterior simplex, the new point is projected out to the boundary of the feasible domain. For problems in which the feasible domain includes edges (arising from the intersection of multiple twice-differentiable constraints), a modified search function is used in the vicinity of these edges to ensure convergence.
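
As a rough illustration of the search-function machinery described above, the sketch below assembles one search step in Python: an interpolant through the current datapoints, a Delaunay triangulation, and a piecewise-quadratic uncertainty built from each simplex's circumsphere. This is a minimal sketch, not the authors' implementation: the radial-basis-function interpolant, the constant tradeoff parameter K, and the names `search_step` and `circumsphere` are all assumptions, and the exterior-simplex bookkeeping and boundary projection of Part II are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def circumsphere(V):
    """Circumcenter z and squared circumradius r2 of the simplex whose
    vertices are the rows of V, shape (n+1, n)."""
    # |z - v_i|^2 = |z - v_0|^2  =>  2 (v_i - v_0) . z = |v_i|^2 - |v_0|^2
    A = 2.0 * (V[1:] - V[0])
    b = np.sum(V[1:] ** 2, axis=1) - np.sum(V[0] ** 2)
    z = np.linalg.solve(A, b)
    return z, float(np.sum((V[0] - z) ** 2))

def search_step(X, fX, K=10.0):
    """Minimize s(x) = p(x) - K * e(x) over each Delaunay simplex of the
    datapoints X (with values fX); return the best minimizer found."""
    p = RBFInterpolator(X, fX)          # interpolant through all datapoints
    tri = Delaunay(X)
    best_x, best_s = None, np.inf
    for simplex in tri.simplices:
        V = X[simplex]
        z, r2 = circumsphere(V)

        def s(lam, V=V, z=z, r2=r2):
            # barycentric parametrization keeps x inside this simplex;
            # e(x) = r^2 - |x - z|^2 vanishes at the datapoints
            x = lam @ V
            return p(x[None, :])[0] - K * (r2 - np.sum((x - z) ** 2))

        m = len(simplex)
        res = minimize(s, np.full(m, 1.0 / m), method="SLSQP",
                       bounds=[(0.0, 1.0)] * m,
                       constraints=[{"type": "eq",
                                     "fun": lambda lam: lam.sum() - 1.0}])
        if res.success and res.fun < best_s:
            best_x, best_s = res.x @ V, res.fun
    return best_x
```

For instance, with `X = np.random.rand(8, 2)` and `fX` holding the objective values at the rows of `X`, `search_step(X, fX)` proposes the next point at which to evaluate the objective.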


Notes

  1. The representation of a convex feasible domain as stated in (1) is the standard form used, e.g., in [15], but is not completely general. Certain convex constraints of interest, such as those implied by linear matrix inequalities (LMIs), cannot be represented in this form. The extension of the present algorithm to feasible domains bounded by LMIs will be considered in future work.

  2. A simplex is said to be regular if all of its edges have equal length.

  3. The problem of globally maximizing the volume of a feasible simplex inside a convex domain is, in general, a nonconvex problem. We do not attempt to solve this global maximization, which is unnecessary in our algorithm.

  4. This is a maximization problem for a convex function over a compact convex domain; such problems are studied in [49].

  5. The maximum eigenvalue is denoted \(\lambda _\text {max}(\cdot )\).

  6. If \(x\le a/b\), \(x\le c/d\), and \(b,d>0\), then \(x\le (a+c)/(b+d)\).

  7. If \(A,B,C>0\) and \(A^2 \le C+BA\), then \(A\le \sqrt{C}+B\); a short derivation is given after these notes.

  8. If \(x,y>0\), then \(\sqrt{x+y} \le \sqrt{x}+\sqrt{y}\).

  9. The point \(\hat{x}\) is an \(\omega \)-limit point (see [45]) of the sequence \(x_k\) if some subsequence of \(x_k\) converges to \(\hat{x}\).

  10. This is simply the classical Rosenbrock function with its unconstrained global minimizer shifted to the origin; a sketch follows these notes.

  11. Though this code is not available online, we obtained a copy of it via personal communication with Prof. Marsden.
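
Since footnotes 6–8 carry the key elementary inequalities used in the convergence analysis, a short verification of the two less-obvious ones may help; the derivations below are assembled from the stated hypotheses, not quoted from the paper.

```latex
% Footnote 6: from x <= a/b and x <= c/d with b, d > 0, we get
%   xb <= a and xd <= c, so x(b + d) <= a + c, i.e. x <= (a + c)/(b + d).
% Footnote 7: A, B, C > 0 and A^2 <= C + BA imply A <= sqrt(C) + B, since
\[
  A^2 - BA - C \le 0
  \;\Longrightarrow\;
  A \le \frac{B + \sqrt{B^2 + 4C}}{2}
    \le \frac{B + \sqrt{B^2} + \sqrt{4C}}{2}
    = B + \sqrt{C},
\]
% where the middle step is footnote 8's subadditivity of the square root,
% \(\sqrt{x + y} \le \sqrt{x} + \sqrt{y}\) for \(x, y > 0\).
```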
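
Footnote 10's shifted Rosenbrock can also be written out explicitly; a minimal sketch, assuming the shift is the usual translation by the all-ones vector (the exact form used in the paper's test is not shown on this page).

```python
import numpy as np

def shifted_rosenbrock(x):
    """Classical Rosenbrock evaluated at x + 1, moving its unconstrained
    global minimizer from (1, ..., 1) to the origin."""
    z = np.asarray(x, dtype=float) + 1.0
    return float(np.sum(100.0 * (z[1:] - z[:-1] ** 2) ** 2
                        + (1.0 - z[:-1]) ** 2))

assert shifted_rosenbrock(np.zeros(2)) == 0.0  # minimum value 0 at the origin
```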

References

  1. Abramson, M.A., Audet, C., Dennis Jr., J.E., Le Digabel, S.: OrthoMADS: a deterministic MADS instance with orthogonal directions. SIAM J. Optim. 20(2), 948–966 (2009)

  2. Audet, C., Dennis Jr., J.E.: A pattern search filter method for nonlinear programming without derivatives. SIAM J. Optim. 14(4), 980–1010 (2004)

  3. Audet, C., Dennis Jr., J.E.: Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim. 17(1), 188–217 (2006)

  4. Audet, C., Dennis Jr., J.E.: A progressive barrier for derivative-free nonlinear programming. SIAM J. Optim. 20(1), 445–472 (2009)

  5. Barvinok, A.: A Course in Convexity, vol. 50. American Mathematical Society, Providence (2002)

  6. Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38(3), 367–426 (1996)

  7. Beyhaghi, P., Cavaglieri, D., Bewley, T.: Delaunay-based derivative-free optimization via global surrogates, part I: linear constraints. J. Glob. Optim. 63(3), 1–52 (2015)

  8. Booker, A.J., Dennis Jr., J.E., Frank, P.D., Serafini, D.B., Torczon, V., Trosset, M.W.: A rigorous framework for optimization of expensive functions by surrogates. Struct. Optim. 17(1), 1–13 (1999)

  9. Belitz, P., Bewley, T.: New horizons in sphere-packing theory, part II: lattice-based derivative-free optimization via global surrogates. J. Glob. Optim. 56(1), 61–91 (2013)

  10. Bertsekas, D.P.: On penalty and multiplier methods for constrained minimization. SIAM J. Control Optim. 14(2), 216–235 (1976)

  11. Bertsimas, D., Tsitsiklis, J.N.: Introduction to Linear Optimization, vol. 6. Athena Scientific, Belmont (1997)

  12. Boissonnat, J.D., Devillers, O., Hornus, S.: Incremental construction of the Delaunay triangulation and the Delaunay graph in medium dimension. In: Proceedings of the Twenty-Fifth Annual Symposium on Computational Geometry, pp. 208–216. ACM (2009)

  13. Coxeter, H.S.M.: Regular Polytopes. Courier Corporation, Chelmsford (1973)

  14. Combettes, P.L.: Hilbertian convex feasibility problem: convergence of projection methods. Appl. Math. Optim. 35(3), 311–330 (1997)

  15. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  16. Dwyer, R.A.: A faster divide-and-conquer algorithm for constructing Delaunay triangulations. Algorithmica 2(1–4), 137–151 (1987)

  17. Dwyer, R.A.: Higher-dimensional Voronoi diagrams in linear expected time. Discrete Comput. Geom. 6(3), 343–367 (1991)

  18. Dwyer, R.A.: The expected number of k-faces of a Voronoi diagram. Comput. Math. Appl. 26(5), 13–19 (1993)

  19. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2007)

  20. Floudas, C.A., Pardalos, P.M.: A Collection of Test Problems for Constrained Global Optimization Algorithms, vol. 455. Springer, Berlin (1990)

  21. Forsgren, A., Gill, P.E., Wright, M.H.: Interior methods for nonlinear optimization. SIAM Rev. 44(4), 525–597 (2002)

  22. Forsgren, A., Gill, P.E.: Primal-dual interior methods for nonconvex nonlinear programming. SIAM J. Optim. 8(4), 1132–1152 (1998)

  23. Gill, P.E., Murray, W.: Newton-type methods for unconstrained and linearly constrained optimization. Math. Program. 7(1), 311–350 (1974)

  24. Gill, P.E., Murray, W., Saunders, M.A., Wright, M.H.: A Schur-Complement Method for Sparse Quadratic Programming. Report SOL 87-12, Systems Optimization Laboratory, Stanford University (1987)

  25. Gill, P.E., Murray, W., Saunders, M.A., Tomlin, J.A., Wright, M.H.: On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method. Math. Program. 36(2), 183–209 (1986)

  26. Heinonen, J.: Lectures on Lipschitz analysis. Report 100, Department of Mathematics and Statistics, University of Jyväskylä, Jyväskylä, pp. 1–77 (2005)

  27. Horst, R., Pardalos, P.M. (eds.): Handbook of Global Optimization, vol. 2. Springer, Berlin (2013)

  28. Jones, D.R., Perttunen, C.D., Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant. J. Optim. Theory Appl. 79(1), 157–181 (1993)

  29. Jones, D.R.: A taxonomy of global optimization methods based on response surfaces. J. Glob. Optim. 21(4), 345–383 (2001)

  30. Kort, B.W., Bertsekas, D.P.: A new penalty function method for constrained minimization. In: Proceedings of the 1972 IEEE Conference on Decision and Control and 11th Symposium on Adaptive Processes, pp. 162–166. IEEE (1972)

  31. Kort, B.W., Bertsekas, D.P.: Combined primal-dual and penalty methods for convex programming. SIAM J. Control Optim. 14(2), 268–294 (1976)

  32. Le Digabel, S.: Algorithm 909: NOMAD: nonlinear optimization with the MADS algorithm. ACM Trans. Math. Softw. 37(4), 44 (2011)

  33. Lewis, R.M., Torczon, V.: Pattern search algorithms for bound constrained minimization. SIAM J. Optim. 9(4), 1082–1099 (1999)

  34. Lewis, R.M., Torczon, V.: Pattern search methods for linearly constrained minimization. SIAM J. Optim. 10(3), 917–941 (2000)

  35. Lewis, R.M., Torczon, V.: A globally convergent augmented Lagrangian pattern search algorithm for optimization with general constraints and simple bounds. SIAM J. Optim. 12(4), 1075–1089 (2002)

  36. Mangasarian, O.L., Fromovitz, S.: The Fritz John necessary optimality conditions in the presence of equality and inequality constraints. J. Math. Anal. Appl. 17(1), 37–47 (1967)

  37. Marsden, A.L., Wang, M., Dennis Jr., J.E., Moin, P.: Optimal aeroacoustic shape design using the surrogate management framework. Optim. Eng. 5(2), 235–262 (2004)

  38. McMullen, P.: The maximum numbers of faces of a convex polytope. Mathematika 17(2), 179–184 (1970)

  39. Nocedal, J., Wright, S.: Numerical Optimization. Springer, New York (2006)

  40. Paulavičius, R., Žilinskas, J.: Simplicial Global Optimization. Springer, New York (2014)

  41. Preparata, F.P., Shamos, M.: Computational Geometry: An Introduction. Springer, Berlin (2012)

  42. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)

  43. Schonlau, M., Welch, W.J., Jones, D.R.: A data-analytic approach to Bayesian global optimization. In: Department of Statistics and Actuarial Science and The Institute for Improvement in Quality and Productivity, ASA Conference (1997)

  44. Shubert, B.O.: A sequential method seeking the global maximum of a function. SIAM J. Numer. Anal. 9(3), 379–388 (1972)

  45. Smith, H.L., Waltman, P.: The Theory of the Chemostat: Dynamics of Microbial Competition, vol. 13. Cambridge University Press, Cambridge (1995)

  46. Susca, S., Bullo, F., Martínez, S.: Gradient algorithms for polygonal approximation of convex contours. Automatica 45(2), 510–516 (2009)

  47. Torczon, V.: On the convergence of pattern search algorithms. SIAM J. Optim. 7(1), 1–25 (1997)

  48. Zhigljavsky, A., Žilinskas, A.: Stochastic Global Optimization, vol. 9. Springer, Berlin (2007)

  49. Zwart, P.B.: Global maximization of a convex function with linear inequality constraints. Oper. Res. 22(3), 602–609 (1974)


Acknowledgments

The authors gratefully acknowledge Prof. Philip Gill and Prof. Alison Marsden for their collaboration, as well as funding from AFOSR award FA 9550-12-1-0046, the Cymer Center for Control Systems & Dynamics, and the Leidos corporation in support of this work.

Author information


Corresponding author

Correspondence to Pooriya Beyhaghi.


About this article


Cite this article

Beyhaghi, P., Bewley, T.R. Delaunay-based derivative-free optimization via global surrogates, part II: convex constraints. J Glob Optim 66, 383–415 (2016). https://doi.org/10.1007/s10898-016-0433-5
