
An ensemble weighted average conservative multi-fidelity surrogate modeling method for engineering optimization

  • Original Article
  • Published in: Engineering with Computers

Abstract

Multi-fidelity (MF) surrogate models have been widely used in engineering optimization problems to reduce the design cost by replacing computationally expensive simulations. Ignoring the prediction uncertainty of the MF model caused by a limited number of samples may result in infeasible solutions. A conservative MF surrogate model, which can effectively improve the feasibility of the constraints, is a promising way to address this issue. In this paper, an ensemble weighted average (EWA) conservative multi-fidelity modeling method that integrates the performance of different error metrics is proposed. In the proposed method, the bootstrap method and the mean-square-error (MSE) method are weighted to calculate the safety margin of the MF surrogate model. The weights for the two metrics are determined by solving an optimization problem that accounts for the performance of the two metrics on different subsets of the sample points. The effectiveness of the proposed method is illustrated through several numerical examples and a pressure vessel design problem. Results show that the proposed method constructs a more accurate conservative MF surrogate model than competing methods across different problems. Furthermore, applying the constructed conservative MF surrogate model to optimization problems yields more accurate optimal solutions while ensuring their feasibility.




Acknowledgements

This research has been supported by the National Natural Science Foundation of China (NSFC) under Grant No. 51805179, and the China Postdoctoral Science Foundation under Grant No. 2020M682396.

Author information

Corresponding author

Correspondence to Huaping Liu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest in this work.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

The eight numerical examples used to test the performance of the four error metrics in Sect. 3.1 are listed below. \(y_{{\text{h}}}\) denotes the high-fidelity model that needs to be approximated, and \(y_{{\text{l}}}\) denotes the low-fidelity model.

Problem 1 (P1):

$$\begin{gathered} y_{{\text{l}}} = - \sin (x) - e^{\frac{x}{100}} + 10.3 + 0.03 \times (x - 3)^{2} \hfill \\ y_{{\text{h}}} = - \sin (x) - e^{\frac{x}{100}} + 10, \, x \in [0,10]. \hfill \\ \end{gathered}$$
(27)
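As an illustration, the P1 pair translates directly to code; a minimal sketch in Python (the function names `y_l` and `y_h` are our own, following Eq. 27):

```python
import numpy as np

# Minimal sketch of the P1 test pair in Eq. (27); function names are ours,
# not from the paper. y_l perturbs y_h with an offset and a quadratic term.
def y_l(x):
    return -np.sin(x) - np.exp(x / 100) + 10.3 + 0.03 * (x - 3) ** 2

def y_h(x):
    return -np.sin(x) - np.exp(x / 100) + 10

x = np.linspace(0, 10, 101)          # design domain x in [0, 10]
discrepancy = np.abs(y_h(x) - y_l(x))  # pointwise HF/LF discrepancy
```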

Problem 2 (P2):

$$\begin{gathered} y_{{\text{l}}} = 0.5 \times \sin (12x - 4) \times (6x - 2)^{2} + 10 \times (x - 0.5) + 5 \hfill \\ y_{{\text{h}}} = \sin (12x - 4) \times (6x - 2)^{2} {, }x \in [0,1]. \hfill \\ \end{gathered}$$
(28)

Problem 3 (P3):

$$\begin{aligned} y_{{\text{l}}} & = 4 \times (0.7x_{1} )^{2} - 2.1 \times (0.7x_{1} )^{4} + (0.7x_{1} )^{6} /3 \\ & + (0.7x_{1} \times 0.7x_{2} ) - 4 \times (0.7x_{2} )^{2} + 4 \times (0.7x_{2} )^{4} \\ y_{{\text{h}}} & = 4x_{1}^{2} - 2.1x_{1}^{4} + x_{1}^{6} /3 + x_{1} x_{2} - 4x_{2}^{2} + 4x_{2}^{4} , \, x_{1} ,x_{2} \in [ - 2,2]. \\ \end{aligned}$$
(29)

Problem 4 (P4):

$$\begin{gathered} y_{{\text{l}}} = \left( {(0.5x_{1} )^{2} + 0.8x_{2} - 11} \right)^{2} + \left( {(0.8x_{2} )^{2} + 0.5x_{1} - 7} \right)^{2} + x_{2}^{3} - (x_{1} + 1)^{2} \hfill \\ y_{{\text{h}}} = (x_{1}^{2} + x_{2} - 11)^{2} + (x_{2}^{2} + x_{1} - 7)^{2} , \, x_{1} ,x_{2} \in [ - 3,3]. \hfill \\ \end{gathered}$$
(30)

Problem 5 (P5):

$$\begin{gathered} y_{{\text{l}}} = (x_{1} - 1)^{2} + 2 \times (2x_{2}^{2} - 0.75x_{1} )^{2} + 3 \times (3x_{3}^{2} - 0.75x_{2} )^{2} + 4 \times (4x_{4}^{2} - 0.75x_{3} )^{2} \hfill \\ y_{{\text{h}}} = (x_{1} - 1)^{2} + 2 \times (2x_{2}^{2} - x_{1} )^{2} + 3 \times (3x_{3}^{2} - x_{2} )^{2} + 4 \times (4x_{4}^{2} - x_{3} )^{2} \, \hfill \\ \quad x_{1} ,x_{2} ,x_{3} ,x_{4} \in [ - 10,10]. \hfill \\ \end{gathered}$$
(31)

Problem 6 (P6):

$$\begin{aligned} f(x_{1} , \ldots ,x_{6} ) & = - \sum\limits_{i = 1}^{4} {c_{i} \exp \left[ { - \sum\limits_{j = 1}^{6} {a_{ij} (x_{j} - p_{ij} )^{2} } } \right]} , \quad x_{j} \in [0,1] \\ [c_{i} ] & = [1 \quad 1.2 \quad 3 \quad 3.2]^{{\text{T}}} , \quad [a_{ij} ] = \begin{bmatrix} 10 & 3 & 17 & 3.5 & 1.7 & 8 \\ 0.05 & 10 & 17 & 0.1 & 8 & 14 \\ 3 & 3.5 & 1.7 & 10 & 17 & 8 \\ 17 & 8 & 0.05 & 10 & 0.1 & 14 \end{bmatrix} \\ [p_{ij} ] & = \begin{bmatrix} 0.1312 & 0.1696 & 0.5569 & 0.0124 & 0.8283 & 0.5886 \\ 0.2329 & 0.4139 & 0.8307 & 0.3736 & 0.1004 & 0.9991 \\ 0.2348 & 0.1451 & 0.3522 & 0.2883 & 0.3047 & 0.6650 \\ 0.4047 & 0.8828 & 0.8732 & 0.5743 & 0.1091 & 0.0381 \end{bmatrix} \\ [lc_{i} ] & = [1.1 \quad 0.8 \quad 2.5 \quad 3]^{{\text{T}}} , \quad [l_{j} ] = [0.75 \quad 1 \quad 0.8 \quad 1.3 \quad 0.7 \quad 1.1]^{{\text{T}}} \\ y_{{\text{l}}} & = - \sum\limits_{i = 1}^{4} {lc_{i} \exp \left[ { - \sum\limits_{j = 1}^{6} {a_{ij} (l_{j} x_{j} - p_{ij} )^{2} } } \right]} \\ y_{{\text{h}}} & = f(x_{1} , \ldots ,x_{6} ), \quad x_{j} \in [0,1]. \\ \end{aligned}$$
(32)

Problem 7 (P7):

$$\begin{gathered} f_{{{\text{borehole}}}} = \frac{{2\pi x_{3} (x_{4} - x_{6} )}}{{\ln (x_{2} /x_{1} )\left[ {1 + 2x_{7} x_{4} /\left( {\ln (x_{2} /x_{1} )x_{1}^{2} x_{8} } \right) + x_{3} /x_{5} } \right]}} \hfill \\ y_{{\text{l}}} = 0.4f_{{{\text{borehole}}}} (x) + 0.07x_{1}^{2} x_{8} + x_{1} x_{7} /x_{3} + x_{1} x_{6} /x_{2} + x_{1}^{2} x_{4} \hfill \\ y_{{\text{h}}} = f_{{{\text{borehole}}}} (x) \hfill \\ x_{1} \in [0.05,0.15], \, x_{2} \in [100,50000], \hfill \\ x_{3} \in [63070,115600], \, x_{4} \in [990,1110], \hfill \\ x_{5} \in [63.1,116], \, x_{6} \in [700,820], \hfill \\ x_{7} \in [1120,1680], \, x_{8} \in [9855,12045]. \hfill \\ \end{gathered}$$
(33)
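The borehole pair in Eq. (33) can likewise be written down directly; a sketch with function names of our own choosing:

```python
import math

# Sketch of the borehole test pair in Eq. (33); function names are ours.
def f_borehole(x1, x2, x3, x4, x5, x6, x7, x8):
    ln_r = math.log(x2 / x1)  # ln(x2/x1) appears twice in Eq. (33)
    return (2 * math.pi * x3 * (x4 - x6)) / (
        ln_r * (1 + 2 * x7 * x4 / (ln_r * x1 ** 2 * x8) + x3 / x5)
    )

def y_l(x1, x2, x3, x4, x5, x6, x7, x8):
    # LF model: scaled borehole response plus the stated correction terms
    return (0.4 * f_borehole(x1, x2, x3, x4, x5, x6, x7, x8)
            + 0.07 * x1 ** 2 * x8 + x1 * x7 / x3
            + x1 * x6 / x2 + x1 ** 2 * x4)

# Evaluate both fidelities near the middle of the design domain
mid = (0.1, 25050, 89335, 1050, 89.55, 760, 1400, 10950)
flow_hf, flow_lf = f_borehole(*mid), y_l(*mid)
```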

Problem 8 (P8):

$$\begin{gathered} y_{l} = \sum\limits_{i = 1}^{10} {x_{i}^{3} } + \left( {\sum\limits_{i = 1}^{10} {2ix_{i} } } \right)^{2} + \left( {\sum\limits_{i = 1}^{10} {3ix_{i} } } \right)^{4} \hfill \\ y_{h} = \sum\limits_{i = 1}^{10} {x_{i}^{2} } + \left( {\sum\limits_{i = 1}^{10} {0.5ix_{i} } } \right)^{2} + \left( {\sum\limits_{i = 1}^{10} {0.5ix_{i} } } \right)^{4} , \, - 5 \le x_{i} \le 10. \hfill \\ \end{gathered}$$
(34)

Appendix 2

2.1 Influence of the number of LF samples

The test setting for investigating the influence of the number of LF samples is given in Table 9. The numbers of HF samples in P1–P6 and P7–P8 are fixed to 6d and 9d, respectively, while the number of LF samples ranges from one to five times the number of HF samples. The comparison results are listed in Table 10 and Fig. 16. In the table, p values larger than 0.05 are marked in bold, while p values smaller than 0.05 are marked in italic.

Table 9 The test setting for investigating the influence of the number of LF samples
Fig. 16

Performance comparison of the four error metrics under different numbers of LF samples: a P1 problem; b P2 problem; c P3 problem; d P4 problem; e P5 problem; f P6 problem; g P7 problem; h P8 problem

Table 10 t test results under different numbers of LF samples

Several observations follow from the comparison: (1) The errors estimated by the MSE method represent the variation of the true errors well in all eight test examples; thus, the MSE method performs best among the four error metrics. (2) The results of the bootstrap method and the PEMF method reflect the variation of the true error in only some of the test problems, and the results of the LOO method are always higher than the true error. (3) The t test results show that the uncertainty quantification results of the bootstrap method and the LOO method differ significantly from the true errors, whereas the MSE method and the PEMF method perform slightly better. In short, the MSE method best reflects the variation of the true error of the MF surrogate model under different numbers of LF samples; however, no error metric can accurately estimate the magnitude of the true error.
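The bootstrap error metric discussed above rests on resampling the training data and refitting the surrogate. The following generic sketch illustrates the idea with a simple polynomial standing in for the MF surrogate; it is an illustration of the resampling principle, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_std(x_train, y_train, x_query, n_boot=200, deg=3):
    """Spread of predictions from surrogates refit on bootstrap resamples.

    A cubic polynomial stands in for the MF surrogate here; this sketch
    only illustrates the bootstrap error-metric idea.
    """
    n = len(x_train)
    preds = np.empty((n_boot, len(x_query)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample with replacement
        coeffs = np.polyfit(x_train[idx], y_train[idx], deg)
        preds[b] = np.polyval(coeffs, x_query)
    return preds.std(axis=0)                         # prediction-uncertainty estimate

x_train = np.linspace(0, 10, 12)
y_train = np.sin(x_train)
sigma_boot = bootstrap_std(x_train, y_train, np.array([1.0, 5.0, 9.0]))
```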

2.2 Influence of noise

During the sample generation process in engineering design, noise inevitably exists in the responses of the sample points. To study its influence on the performance of these error metrics, different levels of noise are added to the HF responses:

$$\hat{y}_{{{\text{mf}}\_{\text{n}}}} (x) = \hat{y}_{{{\text{mf}}}} (x) + l^{\prime}\delta ,$$
(35)

where \(\hat{y}_{{{\text{mf}}}} ( \cdot )\) is the original response from the MF surrogate model, \(\hat{y}_{{{\text{mf}}\_{\text{n}}}} ( \cdot )\) is the response with noise, \(l^{\prime}\) is the noise level ranging from 0 to 15%, and \(\delta\) is drawn from the standard normal distribution \(N(0,1)\). Four different noise levels are set for comparison, as shown in Table 11. The numbers of HF samples and LF samples are fixed to 9d (\(N_{{\text{h}}} = 9d\)) and 25d (\(N_{{\text{l}}} = 25d\)), respectively. The test results are shown in Fig. 17 and Table 12. In the table, p values larger than 0.05 are marked in bold, while p values smaller than 0.05 are marked in italic.
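The noise injection in Eq. (35) amounts to perturbing each surrogate response with scaled standard-normal noise; a minimal sketch (the function name is our own):

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise(y_mf, noise_level):
    """Apply Eq. (35): y_mf_n = y_mf + l' * delta, with delta ~ N(0, 1)."""
    delta = rng.standard_normal(np.shape(y_mf))
    return np.asarray(y_mf) + noise_level * delta

y_mf = np.array([10.0, 12.0, 8.0])
y_noisy = add_noise(y_mf, 0.05)   # 5% noise level
```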

Table 11 The test setting for investigating the influence of noise
Fig. 17

Performance comparison of the four error metrics under different noise levels

Table 12 t test results under different noise levels

According to the uncertainty quantification results in the P1 and P4 problems, the results of the LOO method cannot accurately reflect the variation of the true error in low-dimensional problems. In Fig. 17d, the true error of the MF surrogate model grows with increasing noise level, but the results of the LOO method gradually decrease. In contrast, the bootstrap method and the MSE method reflect the change of the true error accurately. The results in Table 12 confirm this conclusion: the uncertainty estimated by the bootstrap method is not significantly different from the true error in the P1, P3, and P5 problems; in particular, the p value in the P5 problem is 0.9729, far larger than the significance level of 0.05. The MSE method and the PEMF method perform well in two problems, while the LOO method performs well in only one problem.

2.3 Influence of the correlation between the HF and LF model

The LF model is generally used to reflect the trend of the HF responses; therefore, the correlation between the HF and LF models has a vital influence on the accuracy of the MF model. The P3 and P6 test problems are used to test the performance of the four error metrics under different correlation coefficients between the HF and LF models. Seven different LF models are selected for each HF model, with correlation coefficients between the HF and LF model ranging from 0.2 to 0.6. The numbers of HF and LF samples are set to 6d and 25d, respectively. The comparison results are listed in Fig. 18.

Fig. 18

Performance comparison of the four error metrics under different correlation coefficients between the HF and LF model: a comparison results in P3 problem; b comparison results in P6 problem

It can be seen that the errors estimated by the bootstrap method follow the change of the true error most closely in the P3 problem, while the results from the LOO method differ most from the variation of the true error. When the correlation coefficient between the HF and LF model changes from 0.3 to 0.4, the true error tends to increase; among the four error metrics, only the bootstrap method and the MSE method reflect this change. In the P6 problem, the errors estimated by the bootstrap method and the MSE method are very close and reflect the change of the true error well.
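The HF/LF correlation coefficient underlying this comparison can be estimated by sampling both models on a common grid; a minimal sketch (not from the paper, using the P1 pair from Appendix 1 as an example, with function names of our own choosing):

```python
import numpy as np

# Illustrative check (not from the paper): estimate the correlation between
# HF and LF responses by sampling both models on a common set of points.
def hf_lf_correlation(y_h, y_l, x):
    return float(np.corrcoef(y_h(x), y_l(x))[0, 1])

y_h = lambda x: -np.sin(x) - np.exp(x / 100) + 10                       # P1 HF model
y_l = lambda x: -np.sin(x) - np.exp(x / 100) + 10.3 + 0.03 * (x - 3) ** 2  # P1 LF model
r = hf_lf_correlation(y_h, y_l, np.linspace(0, 10, 200))
```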


About this article


Cite this article

Hu, J., Peng, Y., Lin, Q. et al. An ensemble weighted average conservative multi-fidelity surrogate modeling method for engineering optimization. Engineering with Computers 38, 2221–2244 (2022). https://doi.org/10.1007/s00366-020-01203-8

