
Order-based error for managing ensembles of surrogates in mesh adaptive direct search


Abstract

We investigate surrogate-assisted strategies for global derivative-free optimization using the mesh adaptive direct search (MADS) blackbox optimization algorithm. In particular, we build an ensemble of surrogate models to be used within the search step of MADS to perform global exploration, and we examine different methods for selecting the best model for the problem at hand. To do so, we introduce an order-based error tailored to surrogate-based search. We report computational experiments for ten analytical benchmark problems and three engineering design applications. Results demonstrate that different metrics may lead to different model choices and that the use of order-based metrics improves performance.



Notes

  1. There is an error in the objective function formulation of the SNAKE problem in [65].

References

  1. Abramson, M.A., Audet, C., Dennis Jr., J.E., Le Digabel, S.: OrthoMADS: a deterministic MADS instance with orthogonal directions. SIAM J. Optim. 20(2), 948–966 (2009)


  2. Acar, E., Rais-Rohani, M.: Ensemble of metamodels with optimized weight factors. Struct. Multidiscipl. Optim. 37(3), 279–294 (2009)


  3. Agte, J.S., Sobieszczanski-Sobieski, J., Sandusky, R.R.J.: Supersonic business jet design through bilevel integrated system synthesis. In: Proceedings of the World Aviation Conference, Volume SAE Paper No. 1999-01-5622, San Francisco, CA, 1999. MCB University Press, Bradford

  4. Alexandrov, N.M., Dennis Jr., J.E., Lewis, R.M., Torczon, V.: A trust-region framework for managing the use of approximation models in optimization. Struct. Multidiscipl. Optim. 15(1), 16–23 (1998)


  5. Allen, D.M.: The relationship between variable selection and data augmentation and a method for prediction. Technometrics 16(1), 125–127 (1974)


  6. Audet, C.: A survey on direct search methods for blackbox optimization and their applications. In: Pardalos, P.M., Rassias, T.M. (eds.) Mathematics Without Boundaries: Surveys in Interdisciplinary Research, chapter 2, pp. 31–56. Springer, Berlin (2014)


  7. Audet, C., Béchard, V., Le Digabel, S.: Nonsmooth optimization through mesh adaptive direct search and variable neighborhood search. J. Glob. Optim. 41(2), 299–318 (2008)


  8. Audet, C., Booker, A.J., Dennis Jr., J.E., Frank, P.D., Moore, D.W.: A surrogate-model-based method for constrained optimization. Presented at the 8th AIAA/ISSMO symposium on multidisciplinary analysis and optimization, 2000

  9. Audet, C., Dennis Jr., J.E.: Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim. 17(1), 188–217 (2006)


  10. Audet, C., Dennis Jr., J.E.: A progressive barrier for derivative-free nonlinear programming. SIAM J. Optim. 20(1), 445–472 (2009)


  11. Bauer, F., Garabedian, P., Korn, D.: Supercritical Wing Section, vol. III. Springer, Berlin (1977)


  12. Booker, A.J., Dennis Jr., J.E., Frank, P.D., Serafini, D.B., Torczon, V., Trosset, M.W.: A rigorous framework for optimization of expensive functions by surrogates. Struct. Multidiscipl. Optim. 17(1), 1–13 (1999)


  13. Boukouvala, F., Floudas, C.A.: ARGONAUT: AlgoRithms for Global Optimization of coNstrAined grey-box compUTational problems. Optim. Lett. 11(5), 895–913 (2017)


  14. Boukouvala, F., Hasan, M.M.F., Floudas, C.A.: Global optimization of general constrained grey-box models: new method and its application to constrained PDEs for pressure swing adsorption. J. Glob. Optim. 67(1), 3–42 (2017)


  15. Buhmann, M.D.: Radial Basis Functions: Theory and Implementations. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge (2003)


  16. Chen, Y.-C., Wei, C., Yeh, H.-C.: Rainfall network design using kriging and entropy. Hydrol. Process. 22(3), 340–346 (2008)


  17. Chowdhury, S., Mehmani, A., Zhang, J., Messac, A.: Quantifying regional error in surrogates by modeling its relationship with sample density. In: Structures, Structural Dynamics, and Materials and Co-located Conferences. American Institute of Aeronautics and Astronautics (2013)

  18. Colson, B.: Trust-region algorithms for derivative-free optimization and nonlinear bilevel programming. Ph.D. thesis, Département de Mathématique, FUNDP, Namur, Belgium (2003)

  19. Conn, A.R., Le Digabel, S.: Use of quadratic models with mesh-adaptive direct search for constrained black box optimization. Optim. Methods Softw. 28(1), 139–158 (2013)


  20. Conn, A.R., Scheinberg, K., Vicente, L.N.: Introduction to Derivative-Free Optimization. MOS-SIAM Series on Optimization. SIAM, Philadelphia (2009)


  21. Conn, A.R., Toint, P.L.: An algorithm using quadratic interpolation for unconstrained derivative free optimization. In: Di Pillo, G., Gianessi, F. (eds.) Nonlinear Optimization and Applications, pp. 27–47. Plenum Publishing, New York (1996)


  22. Custódio, A.L., Rocha, H., Vicente, L.N.: Incorporating minimum Frobenius norm models in direct search. Comput. Optim. Appl. 46(2), 265–278 (2010)


  23. Dhar, A., Datta, B.: Global optimal design of ground water monitoring network using embedded kriging. Ground Water 47(6), 806–815 (2009)


  24. Eaves, B.C.: On quadratic programming. Manag. Sci. 17(11), 698–711 (1971)


  25. Efron, B.: Estimating the error rate of a prediction rule: improvement on cross-validation. J. Am. Stat. Assoc. 78(382), 316–331 (1983)


  26. Fletcher, R., Gould, N.I.M., Leyffer, S., Toint, P.L., Wächter, A.: On the global convergence of trust-region SQP-filter algorithms for general nonlinear programming. SIAM J. Optim. 13(3), 635–659 (2002)


  27. Fowler, K.R., Reese, J.P., Kees, C.E., Dennis Jr., J.E., Kelley, C.T., Miller, C.T., Audet, C., Booker, A.J., Couture, G., Darwin, R.W., Farthing, M.W., Finkel, D.E., Gablonsky, J.M., Gray, G., Kolda, T.G.: Comparison of derivative-free optimization methods for groundwater supply and hydraulic capture community problems. Adv. Water Resour. 31(5), 743–757 (2008)


  28. Gergonne, J.D.: The application of the method of least squares to the interpolation of sequences. Hist. Math. 1(4), 439–447 (1974)


  29. Goel, T., Haftka, R.T., Shyy, W., Queipo, N.V.: Ensemble of surrogates. Struct. Multidiscipl. Optim. 33(3), 199–216 (2007)


  30. Goel, T., Stander, N.: Comparing three error criteria for selecting radial basis function network topology. Comput. Methods Appl. Mech. Eng. 198(27–29), 2137–2150 (2009)


  31. Goldberg, D.E.: Genetic Algorithms in Search. Optimization and Machine Learning. Addison-Wesley Longman, Boston (1989)


  32. Gramacy, R.B., Gray, G.A., Le Digabel, S., Lee, H.K.H., Ranjan, P., Wells, G., Wild, S.M.: Modeling an augmented Lagrangian for blackbox constrained optimization. Technometrics 58(1), 1–11 (2016)


  33. Gramacy, R.B., Le Digabel, S.: The mesh adaptive direct search algorithm with treed Gaussian process surrogates. Pac. J. Optim. 11(3), 419–447 (2015)


  34. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York (2001)


  35. Hock, W., Schittkowski, K.: Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems, vol. 187. Springer, Berlin (1981)


  36. Jones, D.R., Schonlau, M., Welch, W.J.: Efficient global optimization of expensive black box functions. J. Glob. Optim. 13(4), 455–492 (1998)


  37. Kannan, A., Wild, S.M.: Benefits of deeper analysis in simulation-based groundwater optimization problems. In: Proceedings of the XIX International Conference on Computational Methods in Water Resources (CMWR 2012), June 2012

  38. Kitayama, S., Arakawa, M., Yamazaki, K.: Sequential approximate optimization using radial basis function network for engineering optimization. Optim. Eng. 12(4), 535–557 (2011)


  39. Kodiyalam, S.: Multidisciplinary aerospace systems optimization. Technical Report NASA/CR-2001-211053, Lockheed Martin Space Systems Company, Computational AeroSciences Project, Sunnyvale, CA (2001)

  40. Kroo, I.: Aircraft design: synthesis and analysis; Cruise performance and range. http://adg.stanford.edu/aa241/performance/cruise.html (2005)

  41. Le Digabel, S.: Algorithm 909: NOMAD: nonlinear optimization with the MADS algorithm. ACM Trans. Math. Softw. 37(4), 44:1–44:15 (2011)


  42. Le Thi, H.A., Vaz, A.I.F., Vicente, L.N.: Optimizing radial basis functions by D.C. programming and its use in direct search for global derivative-free optimization. TOP 20(1), 190–214 (2012)


  43. Lim, D., Ong, Y.-S., Jin, Y., Sendhoff, B.: A study on metamodeling techniques, ensembles, and multi-surrogates in evolutionary computation. In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, GECCO ’07, pp. 1288–1295. ACM, New York (2007)

  44. Lukšan, L., Vlček, J.: Test problems for nonsmooth unconstrained and linearly constrained optimization. Technical Report V-798, ICS AS CR, 2000

  45. Mack, Y., Goel, T., Shyy, W., Haftka, R.T., Queipo, N.V.: Multiple surrogates for the shape optimization of bluff body-facilitated mixing. In: 43rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, 2005. AIAA. Paper AIAA-2005-0333

  46. Mardia, K.V., Kent, J.T., Bibby, J.M.: Multivariate Analysis. Probability and mathematical statistics. Academic Press, New York (1979)


  47. Matott, L.S., Leung, K., Sim, J.: Application of MATLAB and Python optimizers to two case studies involving groundwater flow and contaminant transport modeling. Comput. Geosci. 37(11), 1894–1899 (2011)


  48. Matott, L.S., Rabideau, A.J., Craig, J.R.: Pump-and-treat optimization using analytic element method flow models. Adv. Water Resour. 29(5), 760–775 (2006)


  49. McKay, M.D., Beckman, R.J., Conover, W.J.: A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21(2), 239–245 (1979)


  50. Mehmani, A., Chowdhury, S., Messac, A.: Predictive quantification of surrogate model fidelity based on modal variations with sample density. Struct. Multidiscipl. Optim. 52(2), 353–373 (2015)


  51. Mladenović, N., Hansen, P.: Variable neighborhood search. Comput. Oper. Res. 24(11), 1097–1100 (1997)


  52. Moré, J.J., Wild, S.M.: Benchmarking derivative-free optimization algorithms. SIAM J. Optim. 20(1), 172–191 (2009)


  53. Müller, J., Piché, R.: Mixture surrogate models based on Dempster-Shafer theory for global optimization problems. J. Glob. Optim. 51(1), 79–104 (2011)


  54. Müller, J., Shoemaker, C.A.: Influence of ensemble surrogate models and sampling strategy on the solution quality of algorithms for computationally expensive black-box global optimization problems. J. Glob. Optim. 60(2), 123–144 (2014)


  55. Orr, M.J.L.: Introduction to radial basis function networks. Technical report, Center for Cognitive Science, University of Edinburgh (1996)

  56. Peremezhney, N., Hines, E., Lapkin, A., Connaughton, C.: Combining Gaussian processes, mutual information and a genetic algorithm for multi-target optimization of expensive-to-evaluate functions. Eng. Optim. 46(11), 1593–1607 (2014)


  57. Queipo, N.V., Haftka, R.T., Shyy, W., Goel, T., Vaidyanathan, R., Tucher, P.K.: Surrogate-based analysis and optimization. Prog. Aerosp. Sci. 41(1), 1–28 (2005)


  58. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. The MIT Press, Cambridge (2006)


  59. Regis, R.G.: Stochastic radial basis function algorithms for large-scale optimization involving expensive black-box objective and constraint functions. Comput. Oper. Res. 38(5), 837–853 (2011)


  60. Regis, R.G.: Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points. Eng. Optim. 46(2), 218–243 (2014)


  61. Shi, R., Liu, L., Long, T., Liu, J.: An efficient ensemble of radial basis functions method based on quadratic programming. Eng. Optim. 48(7), 1202–1225 (2016)


  62. Stigler, S.M.: Gergonne’s 1815 paper on the design and analysis of polynomial regression experiments. Hist. Math. 1(4), 431–439 (1974)


  63. Stone, M.: An asymptotic equivalence of choice of model by cross-validation and Akaike’s criterion. J. R. Stat. Soc. Ser. B (Methodol.) 39(1), 44–47 (1977)


  64. Taddy, M.A., Gramacy, R.B., Polson, N.G.: Dynamic trees for learning and design. J. Am. Stat. Assoc. 106(493), 109–123 (2011)


  65. Talgorn, B., Le Digabel, S., Kokkolaras, M.: Statistical surrogate formulations for simulation-based design optimization. J. Mech. Des. 137(2), 021405-1–021405-18 (2015)


  66. Tarpey, T.: A note on the prediction sum of squares statistic for restricted least squares. Am. Stat. 54(2), 116–118 (2000)


  67. Tenne, Y.: An adaptive-topology ensemble algorithm for engineering optimization problems. Optim. Eng. 16(2), 303–334 (2014)


  68. Tosserams, S., Etman, L.F.P., Rooda, J.E.: A classification of methods for distributed system optimization based on formulation structure. Struct. Multidiscipl. Optim. 39(5), 503–517 (2009)


  69. Tournemenne, R., Petiot, J.-F., Talgorn, B., Kokkolaras, M., Gilbert, J.: Brass instruments design using physics-based sound simulation models and surrogate-assisted derivative-free optimization. J. Mech. Des. 139(4), 041401-01–041401-19 (2017)


  70. Tribes, C., Dubé, J.-F., Trépanier, J.-Y.: Decomposition of multidisciplinary optimization problems: formulations and application to a simplified wing design. Eng. Optim. 37(8), 775–796 (2005)


  71. Vaz, A.I.F., Vicente, L.N.: A particle swarm pattern search method for bound constrained global optimization. J. Glob. Optim. 39(2), 197–219 (2007)


  72. Viana, F.A.C., Haftka, R.T., Steffen Jr., V., Butkewitsch, S., Leal, M.F.: Ensemble of surrogates: a framework based on minimization of the mean integrated square error. In: 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Schaumburg, IL (2008)

  73. Viana, F.A.C., Haftka, R.T., Watson, L.T.: Efficient global optimization algorithm assisted by multiple surrogate techniques. J. Glob. Optim. 56(2), 669–689 (2013)


  74. Watson, G.S.: Smoothing and interpolation by kriging and with splines. Math. Geol. 16, 601–615 (1984)


  75. Wild, S.M., Regis, R.G., Shoemaker, C.A.: ORBIT: optimization by radial basis function interpolation in trust-regions. SIAM J. Sci. Comput. 30(6), 3197–3219 (2008)



Acknowledgements

This work was supported in part by FRQNT Grant 2015-PR-182098; B. Talgorn and M. Kokkolaras are grateful for the partial support of NSERC/Hydro-Québec Grant EGP2 498903-16. Such support does not constitute an endorsement by the sponsors of the opinions expressed in this article. The authors thank Robin Tournemenne (École Centrale de Nantes) for his insightful questions and comments, which contributed significantly to improving the search algorithm used in this work. The authors also express their gratitude to Stéphane Alarie (IREQ) for his support, which made this work possible.

Author information

Correspondence to Bastien Talgorn.

Appendices

Greedy selection algorithm

(Algorithm 2, the greedy selection algorithm, is presented as a figure in the original article.)

This greedy algorithm is used in Sect. 2.2 to filter the set of projection candidates and in Sect. 3.1.2 to select a subset of the training points around which the radial basis functions will be centered. For the projection step, the inputs of Algorithm 2 are:

$$\begin{aligned} S_{in}&\leftarrow S_{Proj} \text { (set of projection candidates)},\\ \mathbf {x}_0&\leftarrow \mathbf {x}^S_t \text { (non-projected solution of } (\hat{P})\text {)}, \\ p_{out}&\leftarrow 100n. \end{aligned}$$

For the selection of the RBF kernels, the inputs of Algorithm 2 are:

$$\begin{aligned} S_{in}&\leftarrow \mathbf {X} \text { (set of points previously evaluated)},\\ \mathbf {x}_0&\leftarrow \mathbf {x}^*_t \text { (incumbent solution of } (P)\text {)}, \\ p_{out}&\leftarrow q^{\textit{RBF}} \text { (number of radial basis functions, } q^{\textit{RBF}} = \min \{ p/2 , 10n \}\text {)}. \end{aligned}$$

In both cases, the goal of this algorithm is to select a set \(S_{out}\) of \(p_{out}\) points from the set \(S_{in}\subset {\mathbb {R}}^n\). The set \(S_{out}\) must be spread as widely as possible in \({\mathbb {R}}^n\) while favoring points that are close to the target \(\mathbf {x}_{0}\). Since these two goals can conflict, a trade-off parameter \(\lambda \) is introduced. Once many points close to \(\mathbf {x}_{0}\) have been selected, favoring proximity to \(\mathbf {x}_{0}\) may only lead to re-selecting a point that is already selected; in that case, the trade-off coefficient \(\lambda \) is decreased.
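For readers who prefer code, the following is a minimal Python sketch of one greedy loop that is consistent with the description above; it is not the authors' Algorithm 2 (which is given as a figure), and the scoring rule, the starting choice, and the decay factor `lam_decay` are illustrative assumptions.

```python
import numpy as np

def greedy_select(S_in, x0, p_out, lam=0.5, lam_decay=0.5, tol=1e-12):
    """Select p_out points from S_in (an (m, n) array) that are spread out
    in R^n while favoring proximity to the target x0.  The trade-off
    coefficient lam weights proximity against spread and is decreased
    whenever favoring proximity would only re-select a duplicate point."""
    S_in, x0 = np.asarray(S_in, dtype=float), np.asarray(x0, dtype=float)
    remaining = list(range(len(S_in)))

    # Start from the candidate closest to the target x0.
    first = min(remaining, key=lambda i: np.linalg.norm(S_in[i] - x0))
    selected = [first]
    remaining.remove(first)

    while remaining and len(selected) < p_out:
        d_x0 = np.array([np.linalg.norm(S_in[i] - x0) for i in remaining])
        d_sel = np.array([min(np.linalg.norm(S_in[i] - S_in[j]) for j in selected)
                          for i in remaining])
        # Large lam favors closeness to x0; small lam favors spread.
        score = lam * (-d_x0) + (1.0 - lam) * d_sel
        best = int(np.argmax(score))
        if d_sel[best] < tol:
            if lam <= 1e-8:
                remaining.pop(best)   # only duplicates remain: discard this one
            else:
                lam *= lam_decay      # relax the proximity goal
            continue
        selected.append(remaining.pop(best))

    return S_in[selected]
```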

Description of the analytical problems

1.1 MAD6 [44]

The original formulation uses seven variables, but two linear equality constraints allow the problem to be expressed with five variables as follows:

$$\begin{aligned} \underset{\mathbf {x}\in {\mathbb {R}}^{5}}{\min }&\,\, \underset{1\le i \le 163}{\max }f_i(\mathbf {x}) \\ \text {s.t.}&\left\{ \begin{array}{l} c_1(\mathbf {x}) =-x_1 + 0.4 \le 0 \\ c_2(\mathbf {x}) = x_1 - x_2 + 0.4 \le 0 \\ c_3(\mathbf {x}) = x_2 - x_3 + 0.4 \le 0 \\ c_4(\mathbf {x}) = x_3 - x_4 + 0.4 \le 0 \\ c_5(\mathbf {x}) = x_4 - x_5 + 0.4 \le 0 \\ c_6(\mathbf {x}) =-x_4+x_5 - 0.6 \le 0 \\ c_7(\mathbf {x}) = x_4 - 2.1 \le 0 \\ \end{array}\right. \end{aligned}$$

where, for \(1\le i \le 163\),

$$\begin{aligned} f_i(\mathbf {x}) = \frac{1}{15}+\frac{2}{15} \left( \left( \sum _{k=1}^{5} \cos (2\pi x_k \sin v_i) \right) + \cos \big (2\pi (1+x_4) \sin v_i\big ) + \cos (7\pi \sin v_i) \right) \end{aligned}$$

and

$$\begin{aligned} v_i= & {} \frac{\pi }{180}\left( 8.5+\frac{i}{2}\right) .\\ \mathbf {x}_0= & {} \left[ \begin{array}{c} 0.5 \\ 1 \\ 1.5 \\ 2 \\ 2.5 \end{array}\right] , ~f(\mathbf {x}_0)=0.22052,~ \mathbf {x}^*= \left[ \begin{array}{c} 0.4 \\ 0.819839074 \\ 1.219839074 \\ 1.69398531 \\ 2.09398531 \end{array}\right] , ~f(\mathbf {x}^*)=0.101831 . \end{aligned}$$

Best value achieved in this work: \(f=0.101831\).
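To illustrate how such an analytical problem is evaluated as a blackbox (an objective value plus constraint values returned to the solver), here is a small Python transcription of the MAD6 formulation above. It is provided for convenience and is not the authors' code; the reported value \(f(\mathbf {x}_0)=0.22052\) can serve as a sanity check.

```python
import numpy as np

def mad6(x):
    """Blackbox evaluation of the five-variable MAD6 problem as stated above.
    Returns the minimax objective and the seven constraint values c_i(x) <= 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, 164)                          # i = 1, ..., 163
    s = np.sin(np.pi / 180.0 * (8.5 + i / 2.0))    # sin(v_i)

    # f_i(x) = 1/15 + 2/15 * ( sum_k cos(2*pi*x_k*sin(v_i))
    #                          + cos(2*pi*(1 + x_4)*sin(v_i)) + cos(7*pi*sin(v_i)) )
    f_i = (1.0 / 15.0
           + 2.0 / 15.0 * (np.cos(2.0 * np.pi * np.outer(s, x)).sum(axis=1)
                           + np.cos(2.0 * np.pi * (1.0 + x[3]) * s)
                           + np.cos(7.0 * np.pi * s)))
    f = f_i.max()

    c = np.array([-x[0] + 0.4,
                  x[0] - x[1] + 0.4,
                  x[1] - x[2] + 0.4,
                  x[2] - x[3] + 0.4,
                  x[3] - x[4] + 0.4,
                  -x[3] + x[4] - 0.6,
                  x[3] - 2.1])
    return f, c

# Sanity check at the starting point given above:
# mad6([0.5, 1.0, 1.5, 2.0, 2.5])[0] should be approximately 0.22052.
```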

1.2 CRESCENT [1, 10]

$$\begin{aligned} \underset{\mathbf {x}\in {\mathbb {R}}^{10}}{\min }&x_{10} \\ \text {s.t.}&\left\{ \begin{array}{l} c_1(\mathbf {x}) = \sum _{i=1}^{10}(x_i-1)^2 - 100 \le 0 \\ c_2(\mathbf {x}) = \sum _{i=1}^{10}(x_i+1)^2 - 100 \le 0 \end{array}\right. \\ \mathbf {x}_0= & {} [10 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 \; 0 ]^\top ,~f(\mathbf {x}_0)=0,\\ \mathbf {x}^*= & {} [1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; 1 \; -{}9 ]^\top ,~f(\mathbf {x}^*)=-9. \end{aligned}$$

Best value achieved in this work: \(f = -9\).

1.3 SNAKE [10, 19]

$$\begin{aligned} \underset{\mathbf {x}\in {\mathbb {R}}^{2}}{\min }&\sqrt{ (x_1-20)^2 + (x_2-1)^2 } \\ \text {s.t.}&\left\{ \begin{array}{l} c_1(\mathbf {x}) = \sin x_1 -\frac{1}{10} - x_2 \le 0 \\ c_2(\mathbf {x}) = x_2 - \sin x_1 \le 0 \end{array}\right. \\ \mathbf {x}_0= & {} [0 \; -{}10]^\top , ~f(\mathbf {x}_0)=\infty \text{( }\mathbf {x}_0 \text{ is } \text{ infeasible) },\\ \mathbf {x}^*= & {} [20.02887 \; 0.92434]^\top ,~ f(\mathbf {x}^*)=0.08098094. \end{aligned}$$

Best value achieved in this work: \(f=0.08098094\) (see Note 1).
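Since Note 1 points out an error in the SNAKE formulation of [65], a short Python transcription of the formulation as stated above may be useful as a reference sketch (again, not the authors' code):

```python
import numpy as np

def snake(x):
    """Blackbox evaluation of the two-variable SNAKE problem as stated above.
    Returns the objective and the two constraint values c_i(x) <= 0."""
    x1, x2 = float(x[0]), float(x[1])
    f = np.sqrt((x1 - 20.0) ** 2 + (x2 - 1.0) ** 2)
    c = np.array([np.sin(x1) - 0.1 - x2,   # c_1(x) <= 0
                  x2 - np.sin(x1)])        # c_2(x) <= 0
    return f, c

# The starting point [0, -10] violates c_1, hence f(x_0) is treated as infinite;
# the best known point reported above is approximately [20.02887, 0.92434].
```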

1.4 HS24 [35]

$$\begin{aligned} \underset{\mathbf {x}\in {\mathbb {R}}^{2}}{\min }&\frac{(x_1-3)^2-9}{27 \sqrt{3}} x_2^3 \\ \text {s.t.}&\left\{ \begin{array}{l} c_1(\mathbf {x}) = - \frac{x_1}{\sqrt{3}} + x_2 \le 0 \\ c_2(\mathbf {x}) = - x_1 - \sqrt{3} x_2 \le 0 \\ c_3(\mathbf {x}) =+ x_1 + \sqrt{3} x_2 - 6 \le 0 \\ x_1 , x_2 \ge 0 \end{array}\right. \\ \mathbf {x}_0= & {} [1 \; 0.5]^\top , ~f(\mathbf {x}_0)=-0.0133646, \\ \mathbf {x}^*= & {} [3 \; 1.7320508071 ]^\top , ~f(\mathbf {x}^*)=-1. \end{aligned}$$

Best value achieved in this work: \(f=-1\).

1.5 HS36 [35]

$$\begin{aligned} \underset{\mathbf {x}\in {\mathbb {R}}^{3}}{\min }&-x_1 x_2 x_3 \\ \text {s.t.}&\left\{ \begin{array}{l} c_1(\mathbf {x}) = x_1 + 2 x_2 + 2 x_3 - 72 \le 0 \\ 0 \le x_1 \le 20 \\ 0 \le x_2 \le 11 \\ 0 \le x_3 \le 42 \end{array}\right. \\ \mathbf {x}_0= & {} [10 \; 10 \; 10]^\top , ~f(\mathbf {x}_0)=-1000,\\ \mathbf {x}^*= & {} [20 \; 11 \; 15]^\top , ~ f(\mathbf {x}^*)=-3300. \end{aligned}$$

Best value achieved in this work: \(f=-3300\).

1.6 HS37 [35]

$$\begin{aligned} \underset{\mathbf {x}\in {\mathbb {R}}^{3}}{\min }&-x_1 x_2 x_3 \\ \text {s.t.}&\left\{ \begin{array}{l} c_1(\mathbf {x}) = x_1 + 2 x_2 + 2 x_3 - 72 \le 0 \\ c_2(\mathbf {x}) = -x_1 - 2 x_2 - 2 x_3 \le 0\\ 0\le x_1, x_2, x_3 \le 42 \\ \end{array}\right. \\ \mathbf {x}_0= & {} [10 \; 10 \; 10]^\top ,~f(\mathbf {x}_0)=-1000,\\ \mathbf {x}^*= & {} [24 \; 12 \; 12]^\top ,~f(\mathbf {x}^*)=-3456. \end{aligned}$$

Best value achieved in this work: \(f=-3456\).

1.7 HS73 [35]

In the original problem, the variable \(x_4\) is eliminated using the linear equality constraint \(x_4=1-x_1-x_2-x_3\). Constraint \(c_3\) is added to enforce the bound \(x_4 \ge 0\).

$$\begin{aligned} \underset{\mathbf {x}\in {\mathbb {R}}^{3}}{\min }&24.55 x_1 +26.75 x_2 + 39 x_3 + 40.5 (1-x_1-x_2-x_3) \\ \text {s.t.}&\left\{ \begin{array}{lllll} c_1(\mathbf {x}) &{}= - 2.3 x_1 - 5.6 x_2 - 11.1 x_3 - 1.3 (1-x_1-x_2-x_3) + 5 \le 0 \\ c_2(\mathbf {x}) &{}= -12x_1-11.9x_2-41.8x_3-52.1(1-x_1-x_2-x_3)+21 \\ &{}\quad + 1.645 \sqrt{ 0.28 x_1^2 + 0.19 x_2^2 + 20.5 x_3^2 + 0.62 (1-x_1-x_2-x_3)^2} \le 0 \\ c_3(\mathbf {x}) &{}= x_1+x_2+x_3-1 \le 0 \\ &{}\quad 0 \le x_1, x_2, x_3 \le 1 \\ \end{array}\right. \\ \mathbf {x}_0= & {} \left[ \begin{array}{c} 0.3 \\ 0.3 \\ 0.3 \end{array}\right] , ~f(\mathbf {x}_0)=31.14,~ \mathbf {x}^*= \left[ \begin{array}{c} 0.6355215683 \\ 0 \\ 0.3127018814 \end{array}\right] ,~f(\mathbf {x}^*)=29.8944. \end{aligned}$$

Best value achieved in this work: \(f=29.8944\).

1.8 HS101, 102 and 103 [35]

$$\begin{aligned} \underset{\mathbf {x}\in {\mathbb {R}}^{7}}{\min }&\frac{10 x_1 x_4^2 x_7^a }{x_2 x_6^3} + \frac{15 x_3 x_4}{x_1 x_2^2 x_5 \sqrt{x_7}} + \frac{20 x_2 x_6}{x_1^2 x_4 x_5^2} + \frac{25 x_1^2 x_2^2 \sqrt{x_5} x_7}{x_3 x_6^2} \\ \text {s.t.}&\left\{ \begin{array}{lrl} c_1(\mathbf {x}) = \frac{0.5 \sqrt{x_1} x_7}{x_3 \sqrt{x_6}} + \frac{0.7 x_1^3 x_2 x_6 \sqrt{x_7}}{x_3^2} + \frac{0.2 x_3 x_6^{2/3} \root 4 \of {x_7}}{x_2 \sqrt{x_4}} -1 \le 0 \\ c_2(\mathbf {x}) = \frac{2 x_1 x_5 \root 3 \of {x_7}}{x_3^{3/2} x_6} + \frac{0.1 x_2 x_5}{x_6\sqrt{x_3}\sqrt{x_7}} + \frac{x_2 \sqrt{x_3} x_5}{x_1} + \frac{0.65 x_3 x_5 x_7}{x_2^2 x_6} -1 \le 0 \\ c_3(\mathbf {x}) = \frac{2 x_1 x_5 \root 3 \of {x_7}}{x_3^{3/2} x_6} + \frac{0.1 x_2 x_5}{\sqrt{x_3} x_6 \sqrt{x_7}} + \frac{x_2 \sqrt{x_3} x_5}{x_1} + \frac{0.65 x_3 x_5 x_7}{x_2^2 x_6} -1 \le 0 \\ c_4(\mathbf {x}) = \frac{0.2 x_2 \sqrt{x_5} \root 3 \of {x_7}}{x_1^2 x_4} + \frac{0.3 \sqrt{x_1} x_2^2 x_3 \root 3 \of {x_4} \root 4 \of {x_7}}{x_5^{2/3}} + \frac{0.4 x_3 x_5 x_7^{3/4}}{x_1^3 x_2^2} + \frac{0.5 x_4 \sqrt{x_7}}{x_3^2} -1 \le 0 \\ 0.1 \le x_1, x_2, \ldots , x_6 \le 10 \\ 0.01 \le x_7 \le 10\\ \end{array}\right. \end{aligned}$$

The value of the parameter a varies between the three problems:

$$\begin{aligned} \left\{ \begin{array}{l} \hbox {HS101}: a = -1/4; \\ \hbox {HS102}: a = -1/8; \\ \hbox {HS103}: a = -1/2. \\ \end{array}\right. \end{aligned}$$

The starting point is \( \mathbf {x}_0= [ 6 \; 6 \; 6 \; 6 \; 6 \; 6 \; 6 ]^\top \) with the value \(f(\mathbf {x}_0)=+\infty \) (\(\mathbf {x}_0\) is infeasible), and the best known solutions are:

$$\begin{aligned} \mathbf {x}^*_{\text {HS101}} = \left[ \begin{array}{c} 3.1921264708 \\ 0.7569354058 \\ 2.5119021207 \\ 5.6530753445 \\ 0.871294938 \\ 1.261887906 \\ 0.0558117342 \end{array}\right] , ~\mathbf {x}^*_{\text {HS102}} = \left[ \begin{array}{c} 3.3669180827 \\ 0.7679140551 \\ 2.797994961 \\ 4.1843330406 \\ 0.8791473174 \\ 1.0784651788 \\ 0.0328185121 \end{array}\right] , ~\mathbf {x}^*_{\text {HS103}} = \left[ \begin{array}{c} 2.2198703858 \\ 0.6133345902 \\ 2.105352648 \\ 4.5835873238 \\ 0.8698142544 \\ 1.1828253718 \\ 0.046094896 \end{array}\right] \end{aligned}$$

with

$$\begin{aligned} f(\mathbf {x}^*_{\text {HS101}})=1948.02,~ f(\mathbf {x}^*_{\text {HS102}})=1495.48,~ f(\mathbf {x}^*_{\text {HS103}})=3367.69. \end{aligned}$$

The best values achieved in this work are 2480.94, 1950.26, and 3367.69, for HS101, HS102, and HS103, respectively.

Data profiles

For a given tolerance \(\tau \le 1\), the ratio of solved problems for solver \(s\) after \(i\) groups of \(n+1\) evaluations is defined as:

$$\begin{aligned} r_{s,i}(\tau ) = \frac{1}{\rho _{\max }} \sum _{\rho =1}^{\rho _{\max }} \mathbb {1}\Big (\delta _{s,\rho ,i} \le \tau \Big ), \end{aligned}$$

where \(\mathbb {1}\) is the indicator function. Note that if \(\tau =1\), a problem is considered solved as soon as a feasible solution has been found. For a given value of \(\tau \), the data profile is the curve of the ratio of solved problems as a function of the number \(i\) of groups of \(n+1\) evaluations [52]. For the set of analytical problems and for the two sets of MDO runs (simplified wing and aircraft range), we show data profiles for \(\tau \in \{10^{-3},10^{-5},10^{-7}\}\) (see Figs. 9, 10, 11). For the Lockwood problem runs, which have a much smaller evaluation budget, we show data profiles for \(\tau \in \{10^{-1},10^{-2},10^{-3}\}\) (see Fig. 12). Note that, unlike the discrepancy curves (for which lower values indicate better performance), higher data-profile curves indicate better performance.
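As a concrete illustration, a data profile can be computed in a few lines once the discrepancy values \(\delta _{s,\rho ,i}\) (defined in the main text, following [52]) have been stored. The array layout and the helper name below are assumptions made for this sketch, not part of the authors' code.

```python
import numpy as np

def data_profile(delta, tau):
    """Compute r[s, i], the fraction of the rho_max problem instances solved
    by solver s within tolerance tau after i groups of n+1 evaluations.

    delta is assumed to be an array of shape (n_solvers, n_instances, n_groups)
    holding delta_{s,rho,i}; unsolved or infeasible entries may be np.inf."""
    delta = np.asarray(delta, dtype=float)
    return (delta <= tau).mean(axis=1)

# Example: one curve per solver for each tolerance used above.
# for tau in (1e-3, 1e-5, 1e-7):
#     r = data_profile(delta, tau)   # r[s, :] is the data profile of solver s
```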

Fig. 9: Data profiles for the ten analytical problems; \(\tau = 10^{-3}\) (left), \(10^{-5}\) (middle) and \(10^{-7}\) (right)

Fig. 10: Data profiles for the 50 optimization runs of the simplified wing problem; \(\tau = 10^{-3}\) (left), \(10^{-5}\) (middle) and \(10^{-7}\) (right)

Fig. 11: Data profiles for the 50 optimization runs of the aircraft range problem; \(\tau = 10^{-3}\) (left), \(10^{-5}\) (middle) and \(10^{-7}\) (right)

Fig. 12: Data profiles for the 50 optimization runs of the Lockwood problem; \(\tau = 10^{-1}\) (left), \(10^{-2}\) (middle) and \(10^{-3}\) (right)



Cite this article

Audet, C., Kokkolaras, M., Le Digabel, S. et al. Order-based error for managing ensembles of surrogates in mesh adaptive direct search. J Glob Optim 70, 645–675 (2018). https://doi.org/10.1007/s10898-017-0574-1

