
Global sensitivity analysis using polynomial chaos expansion enhanced Gaussian process regression method

  • Original Article
  • Published in Engineering with Computers

Abstract

Global sensitivity analysis (GSA) is a commonly used approach for exploring the contribution of input variables to the model output and identifying the most important variables. However, performing GSA typically requires a large number of model evaluations, which can impose a heavy computational burden, particularly when the model is expensive to evaluate. To address this issue, an efficient Sobol index estimator is proposed in this paper based on a polynomial chaos expansion (PCE) enhanced Gaussian process regression (GPR) method, termed PCEGPR. The orthogonal polynomial functions of the PCE method are incorporated into the GPR surrogate model to construct the kernel function. An estimation scheme based on fixed-point iteration and the leave-one-out cross-validation error is presented to determine the optimal parameters of the PCEGPR method. Analytical expressions for the main and total sensitivity indices are also derived from the posterior predictor and covariance of the PCEGPR surrogate model. The effectiveness of the proposed estimator is demonstrated by four numerical examples.
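The kernel construction summarized above — orthogonal PCE basis functions combined into a GPR covariance — can be sketched in a few lines. This is a minimal 1-D illustration assuming standard-normal inputs and probabilists' Hermite polynomials as the orthogonal basis; the function name `pce_kernel` and the weight values are our own, not the paper's implementation.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def pce_kernel(x1, x2, weights):
    """1-D PCE-style kernel: k(x1, x2) = sum_j w_j He_j(x1) He_j(x2),
    where He_j are probabilists' Hermite polynomials (orthogonal with
    respect to the standard-normal density) and w_j > 0 weight each term."""
    k = 0.0
    for j, w in enumerate(weights):
        coef = np.zeros(j + 1)
        coef[j] = 1.0               # select the degree-j polynomial He_j
        k += w * hermeval(x1, coef) * hermeval(x2, coef)
    return k

# The resulting Gram matrix is symmetric positive semi-definite by
# construction, as required of a GPR covariance function.
x = np.linspace(-2.0, 2.0, 9)
w = np.array([1.0, 0.5, 0.25, 0.125])
K = np.array([[pce_kernel(a, b, w) for b in x] for a in x])
```

Because the kernel is a finite sum of rank-one terms, the corresponding GPR prior is exactly a Bayesian linear model in the PCE basis, which is what makes the closed-form Sobol index expressions possible.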


Data availability

Data generated during this study are available from the corresponding author upon reasonable request.

References

  1. Li S, Tang ZC (2018) An efficient numerical simulation method for evaluations of uncertainty analysis and sensitivity analysis of system with mixed uncertainties. Adv Mech Eng 10(10):1687814018800533
  2. Cheng K, Lu ZZ, Ling CN, Zhou ST (2020) Surrogate-assisted global sensitivity analysis: an overview. Struct Multidiscip Optim 61(3):1187–1213
  3. Sun X, Choi Y, Choi J (2020) Global sensitivity analysis for multivariate outputs using polynomial chaos-based surrogate models. Appl Math Model 82:867–887
  4. Wang P, Li CY, Liu FC, Zhou HY (2021) Global sensitivity analysis of failure probability of structures with uncertainties of random variable and their distribution parameters. Eng Comput. https://doi.org/10.1007/s00366-021-01484-7
  5. Wei PF, Lu ZZ, Song JW (2015) Variable importance analysis: a comprehensive review. Reliab Eng Syst Saf 142:399–432
  6. Wang L, Zhang XB, Li GJ, Lu ZZ (2022) Credibility distribution function based global and regional sensitivity analysis under fuzzy uncertainty. Eng Comput 38:1349–1362
  7. Saltelli A, Annoni P (2010) How to avoid a perfunctory sensitivity analysis. Environ Model Softw 25(12):1508–1517
  8. Kucherenko S, Song SF, Wang L (2019) Different numerical estimators for main effect global sensitivity indices. Reliab Eng Syst Saf 165:222–238
  9. Borgonovo E, Hazen GB, Plischke E (2016) A common rationale for global sensitivity measures and their estimation. Risk Anal 36(10):1871–1895
  10. Janon A, Klein T, Lagnoux A, Nodet M, Prieur C (2014) Asymptotic normality and efficiency of two Sobol index estimators. ESAIM Probab Stat 18:342–364
  11. Sobol IM (2001) Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math Comput Simul 55:271–280
  12. Burnaev E, Panin I, Sudret B (2017) Efficient design of experiments for sensitivity analysis based on polynomial chaos expansions. Ann Math Artif Intell 81(1–2):187–207
  13. Qian E, Peherstorfer B, O'Malley D, Vesselinov W, Willcox K (2018) Multifidelity Monte Carlo estimation of variance and sensitivity indices. SIAM/ASA J Uncertain Quantif 6(2):683–706
  14. Damblin G, Ghione A (2021) Adaptive use of replicated Latin hypercube designs for computing Sobol' sensitivity indices. Reliab Eng Syst Saf 212:107507
  15. Janouchova E, Kucerova A (2013) Competitive comparison of optimal designs of experiments for sampling-based sensitivity analysis. Comput Struct 124:47–60
  16. Borgonovo E, Plischke E (2016) Sensitivity analysis: a review of recent advances. Eur J Oper Res 246(3):869–887
  17. Viana FAC, Gogu C, Goel T (2021) Surrogate modeling: tricks that endured the test of time and some recent developments. Struct Multidiscip Optim 64(5):2881–2908
  18. Bhosekar A, Ierapetritou M (2018) Advances in surrogate based modeling, feasibility analysis, and optimization: a review. Comput Chem Eng 108:250–267
  19. Zhu ZG, Ji HB, Li L (2023) Deep multi-modal subspace interactive mutual network for specific emitter identification. IEEE Trans Aerosp Electron Syst. https://doi.org/10.1109/TAES.2023.3240115
  20. Yang HQ, Wang ZH, Song KL (2022) A new hybrid grey wolf optimizer-feature weighted-multiple kernel-support vector regression technique to predict TBM performance. Eng Comput 38:2469–2485
  21. Schöbi R, Sudret B, Wiart J (2015) Polynomial-chaos-based Kriging. Int J Uncertain Quantif 5:171–193
  22. Shang XB, Ma P, Chao T, Yang M (2020) A sequential experimental design for multivariate sensitivity analysis using polynomial chaos expansion. Eng Optim 52(8):1382–1400
  23. Kaintura A, Spina D, Couckuyt I, Knockaert L, Bogaerts W, Dhaene T (2017) A Kriging and stochastic collocation ensemble for uncertainty quantification in engineering applications. Eng Comput 33(4):935–949
  24. Liu FC, He PF, Dai Y (2023) A new Bayesian probabilistic integration framework for hybrid uncertainty propagation. Appl Math Model 117:296–315
  25. Jiang C, Hu Z, Liu YX, Mourelatos ZP, Gorsich D, Jayakumar P (2020) A sequential calibration and validation framework for model uncertainty quantification and reduction. Comput Methods Appl Mech Eng 368:113172
  26. Yang MD, Zhang DQ, Han X (2022) Efficient local adaptive Kriging approximation method with single-loop strategy for reliability-based design optimization. Eng Comput 38(3):2431–2449
  27. Sudret B (2008) Global sensitivity analysis using polynomial chaos expansions. Reliab Eng Syst Saf 93(7):964–979
  28. Crestaux T, Le Maitre O, Martinez JM (2009) Polynomial chaos expansion for sensitivity analysis. Reliab Eng Syst Saf 94(7):1161–1172
  29. Palar PS, Tsuchiya T, Parks GT (2016) Multi-fidelity non-intrusive polynomial chaos based on regression. Comput Methods Appl Mech Eng 307:489–490
  30. Palar PS, Zuhal LR, Shimoyama K, Tsuchiya T (2018) Global sensitivity analysis via multi-fidelity polynomial chaos expansion. Reliab Eng Syst Saf 170:175–190
  31. Bhattacharyya B (2020) Global sensitivity analysis: a Bayesian learning based polynomial chaos approach. J Comput Phys 415:109539
  32. Guo L, Narayan A, Zhou T (2018) A gradient enhanced ℓ1-minimization for sparse approximation of polynomial chaos expansions. J Comput Phys 367:49–64
  33. Peng J, Hampton J, Doostan A (2016) On polynomial chaos expansion via gradient-enhanced ℓ1-minimization. J Comput Phys 310:440–458
  34. Chen LM, Qiu HB, Jiang C, Xiao M, Gao L (2018) Support vector enhanced Kriging for metamodeling with noisy data. Struct Multidiscip Optim 57(4):1611–1623
  35. Marrel A, Iooss B, Laurent B, Roustant O (2009) Calculations of Sobol indices for the Gaussian process metamodel. Reliab Eng Syst Saf 94(3):742–751
  36. De Lozzo M, Marrel A (2016) Estimation of the derivative-based global sensitivity measures using a Gaussian process metamodel. SIAM/ASA J Uncertain Quantif 4(1):708–738
  37. Zhou YC, Lu ZZ, Cheng K, Yun WY (2019) A Bayesian Monte Carlo-based method for efficient computation of global sensitivity indices. Mech Syst Signal Process 117:498–516
  38. Cheng K, Lu ZZ (2018) Adaptive sparse polynomial chaos expansions for global sensitivity analysis based on support vector regression. Comput Struct 194:86–96
  39. Tang KK, Congedo PM, Abgrall R (2016) Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation. J Comput Phys 214:557–589
  40. Lin Q, Hu DW, Hu JX, Cheng YS, Zhou Q (2021) A screening-based gradient-enhanced Gaussian process regression model for multi-fidelity data fusion. Adv Eng Inform 50:101437
  41. Arthur CK, Temeng VA, Ziggah YY (2020) Novel approach to predicting blast-induced ground vibration using Gaussian process regression. Eng Comput 36(1):29–42
  42. Hadigol M, Doostan A (2018) Least squares polynomial chaos expansion: a review of sampling strategies. Comput Methods Appl Mech Eng 332:382–407
  43. Cheng K, Lu ZZ, Zhou YC, Shi Y, Wei YH (2017) Global sensitivity analysis using support vector regression. Appl Math Model 49:587–598
  44. Moghaddam VH, Hamidzadeh J (2016) New Hermite orthogonal polynomial kernel and combined kernels in support vector machine classifier. Pattern Recogn 60:921–935
  45. Cheng K, Lu ZZ, Zhen Y (2019) Multi-level multi-fidelity sparse polynomial chaos expansion based on Gaussian process regression. Comput Methods Appl Mech Eng 349:360–377
  46. Yan L, Duan XJ, Liu BW, Xu J (2018) Gaussian processes and polynomial chaos expansion for regression problem: linkage via the RKHS and comparison via the KL divergence. Entropy 20(3):191
  47. Cheng K, Lu ZZ, Xiao SN, Oladyshkin S, Nowak W (2022) Mixed covariance function Kriging model for uncertainty quantification. Int J Uncertain Quantif 12(3):17–30
  48. Liu HT, Cai JF, Ong YS (2017) An adaptive sampling approach for Kriging metamodeling by maximizing expected prediction error. Comput Chem Eng 106:171–182
  49. Tipping ME (2001) Sparse Bayesian learning and the relevance vector machine. J Mach Learn Res 1(3):211–244
  50. Shang XB, Chao T, Ma P, Yang M (2020) An efficient local search-based genetic algorithm for constructing optimal Latin hypercube design. Eng Optim 52(2):271–287
  51. Shang XB, Su L, Fang H, Zeng BW, Zhang Z (2023) An efficient multi-fidelity Kriging surrogate model-based method for global sensitivity analysis. Reliab Eng Syst Saf 229:108858
  52. Kucherenko S, Feil B, Shah N, Mauntz W (2011) The identification of model effective dimensions using global sensitivity analysis. Reliab Eng Syst Saf 96(4):440–449
  53. Wu ZP, Wang WJ, Wang DH, Zhao K, Zhang WH (2019) Global sensitivity analysis using orthogonal augmented radial basis function. Reliab Eng Syst Saf 185:291–302
  54. Wipf DP, Rao BD (2004) Sparse Bayesian learning for basis selection. IEEE Trans Signal Process 52(8):2153–2164

Acknowledgements

This work is supported by the Natural Science Foundation of Heilongjiang Province of China [grant number: LH2023F022], the National Natural Science Foundation of China [grant numbers: 62173103, 52171299], and the Fundamental Research Funds for the Central Universities of China [grant numbers: 3072022JC0402, 3072022JC0403].

Funding

Natural Science Foundation of Heilongjiang Province of China, LH2023F022, National Natural Science Foundation of China, 62173103, 52171299, Fundamental Research Funds for the Central Universities of China, 3072022JC0402, 3072022JC0403.

Author information

Corresponding author

Correspondence to Zhi Zhang.

Ethics declarations

Conflict of interest

The authors declare that there are no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

In the general case, random noise \(\upsilon\) with zero mean and variance \(\sigma^{2}\), i.e., \(\upsilon \sim \boldsymbol{\mathcal{N}}\left( {0,\sigma^{2} } \right)\), is considered in the training data. The covariance matrix in Eq. (20) can then be written as \(\Lambda = \Phi^{T} W\Phi + \sigma^{2}I\), where \(I\) denotes the identity matrix. The maximum likelihood method is used to compute \(\mu\) and \(W\). The likelihood function is written as

$$p\left( {\boldsymbol{\mathcal{Y}}|\mu ,W} \right) = \left( {2\pi } \right)^{ - N/2} \left| \Lambda \right|^{ - 1/2} \exp \left( { - \frac{{{\varvec{\beta}}^{T} \Lambda^{ - 1} {\varvec{\beta}}}}{2}} \right)$$
(66)

Taking the logarithm of the likelihood function, the log-likelihood is given as

$$\boldsymbol{\mathcal{L}}\left( {\mu ,W} \right) = \ln p\left( {\boldsymbol{\mathcal{Y}}|\mu ,W} \right) = - \frac{N}{2}\ln \left( {2\pi } \right) - \frac{1}{2}\ln \left| \Lambda \right| - \frac{1}{2}{\varvec{\beta}}^{T} \Lambda^{ - 1} {\varvec{\beta}}$$
(67)
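As a numerical sanity check of Eq. (67) (a sketch only; the dimensions and random values below are arbitrary), the closed form can be compared against a reference multivariate-normal log-density:

```python
import numpy as np
from scipy.stats import multivariate_normal

# beta = Y - F*mu is distributed N(0, Lambda); build an arbitrary
# well-conditioned covariance Lambda for illustration.
rng = np.random.default_rng(1)
N = 6
A = rng.standard_normal((N, N))
Lam = A @ A.T + N * np.eye(N)
beta = rng.standard_normal(N)

# Eq. (67): -(N/2) ln(2 pi) - (1/2) ln|Lambda| - (1/2) beta^T Lambda^-1 beta
loglik = (-0.5 * N * np.log(2.0 * np.pi)
          - 0.5 * np.linalg.slogdet(Lam)[1]
          - 0.5 * beta @ np.linalg.solve(Lam, beta))
ref = multivariate_normal.logpdf(beta, mean=np.zeros(N), cov=Lam)
```

Using `slogdet` and `solve` instead of explicit determinants and inverses is the numerically stable way to evaluate this expression.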

Taking the derivative of \(\boldsymbol{\mathcal{L}}\left( {\mu ,W} \right)\) with respect to \(\mu\) and setting it to zero, we have

$$\frac{{\partial \boldsymbol{\mathcal{L}}\left( {\mu ,W} \right)}}{\partial \mu } = - \frac{1}{2}\frac{{\partial {\varvec{\beta}}^{T} \Lambda^{ - 1} {\varvec{\beta}} }}{\partial \mu } = F^{T} \Lambda^{ - 1} \left( {\boldsymbol{\mathcal{Y}} - F\mu } \right) = 0$$
(68)

Then, the maximum likelihood estimation of \(\mu\) is given as

$$\hat{\mu } = \left( {F^{T} \Lambda^{ - 1} F} \right)^{ - 1} F^{T} \Lambda^{ - 1} \boldsymbol{\mathcal{Y}}$$
(69)
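Equation (69) is a generalized least-squares estimate and is best computed with linear solves rather than explicit matrix inverses. A minimal sketch (the function name `gls_mu` is ours):

```python
import numpy as np

def gls_mu(F, Lam, Y):
    """Generalized least-squares estimate of mu, Eq. (69):
    mu_hat = (F^T Lam^-1 F)^-1 F^T Lam^-1 Y.
    F: (N, q) regression matrix, Lam: (N, N) covariance, Y: (N,) responses."""
    Li_F = np.linalg.solve(Lam, F)          # Lam^-1 F
    Li_Y = np.linalg.solve(Lam, Y)          # Lam^-1 Y
    return np.linalg.solve(F.T @ Li_F, F.T @ Li_Y)
```

With \(\Lambda = I\) this reduces to ordinary least squares; with a constant trend (F a column of ones) it returns the sample mean.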

The second term of \(\boldsymbol{\mathcal{L}}\left( {\mu ,W} \right)\) in Eq. (67) can be written as

$$\ln \left| \Lambda \right| = \ln \left( {\left| {\sigma^{2} I} \right|\left| C \right|\left| W \right|} \right) = \ln \left| C \right| + N\ln \sigma^{2} + \ln \left| W \right|$$
(70)

where \(C = \sigma^{ - 2} \Phi \Phi^{T} + W^{ - 1}\). According to the Woodbury matrix inversion identity [54], \(\Lambda^{ - 1}\) is reformulated as

$$\Lambda^{ - 1} = \left( {\Phi^{T} W\Phi + \sigma^{2}I } \right)^{ - 1} = \sigma^{ - 2}I - \sigma^{ - 2} \Phi^{T} C^{ - 1} \Phi \sigma^{ - 2}$$
(71)

and the vector of PCE coefficients \({\varvec{\lambda}}\) in Eq. (25) is deduced as

$${\varvec{\lambda}} = W\Phi \left( {\Phi^{T} W\Phi + \sigma^{2}I} \right)^{ - 1} {\varvec{\beta}} = \sigma^{ - 2} C^{ - 1} \Phi {\varvec{\beta}}$$
(72)
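Both identities, Eqs. (71) and (72), can be verified numerically; a sketch with arbitrary small dimensions, taking \(\Phi\) as P × N (basis terms by samples) so that \(\Lambda = \Phi^{T} W\Phi + \sigma^{2}I\) is N × N:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 5, 8                                   # basis terms, samples (arbitrary)
Phi = rng.standard_normal((P, N))             # basis evaluations
w = rng.uniform(0.5, 2.0, size=P)
W = np.diag(w)
s2 = 0.3                                      # noise variance sigma^2
beta = rng.standard_normal(N)

Lam = Phi.T @ W @ Phi + s2 * np.eye(N)        # covariance matrix Lambda
C = Phi @ Phi.T / s2 + np.diag(1.0 / w)       # C = sigma^-2 Phi Phi^T + W^-1

# Eq. (71): Woodbury form of Lambda^-1
Lam_inv = np.eye(N) / s2 - Phi.T @ np.linalg.solve(C, Phi) / s2**2

# Eq. (72): the two expressions for the coefficient vector lambda agree
lam_direct = W @ Phi @ np.linalg.solve(Lam, beta)
lam_wood = np.linalg.solve(C, Phi @ beta) / s2
```

The practical point of the Woodbury form is that it replaces an N × N inverse by a P × P one, which is cheaper whenever the PCE basis is smaller than the training set.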

Based on Eqs. (71) and (72), the third term of \(\boldsymbol{\mathcal{L}}\left( {\mu ,W} \right)\) is derived as [49, 54]

$${\varvec{\beta}}^{T} \Lambda^{ - 1} {\varvec{\beta}} = \sigma^{ - 2} \left( {{\varvec{\beta}} - \Phi^{T} {\varvec{\lambda}}} \right)^{T} \left( {{\varvec{\beta}} - \Phi^{T} {\varvec{\lambda}}} \right) + {\varvec{\lambda}}^{T} W^{ - 1} {\varvec{\lambda}}$$
(73)

Taking the derivative of \(\boldsymbol{\mathcal{L}}\left( {\mu ,W} \right)\) with respect to \(w_{{{\varvec{\alpha}}_{i} }}\) and forcing it to zero, the following equation is obtained

$$\frac{{\partial \boldsymbol{\mathcal{L}}\left( {\mu ,W} \right)}}{{\partial w_{{{\varvec{\alpha}}_{i} }} }} = \frac{{C_{{{\varvec{\alpha}}_{i} {\varvec{\alpha}}_{i} }}^{ - 1} - w_{{{\varvec{\alpha}}_{i} }}^{2} + \lambda_{{{\varvec{\alpha}}_{i} }}^{2} }}{{w_{{{\varvec{\alpha}}_{i} }}^{3} }} = 0$$
(74)

where \(C_{{{\varvec{\alpha}}_{i} {\varvec{\alpha}}_{i} }}^{ - 1}\) is the ith diagonal element of \(C^{ - 1}\). Then the estimation of \(w_{{{\varvec{\alpha}}_{i} }}\) is given as

$$\hat{w}_{{{\varvec{\alpha}}_{i} }} = \left( {C_{{{\varvec{\alpha}}_{i} {\varvec{\alpha}}_{i} }}^{ - 1} + \lambda_{{{\varvec{\alpha}}_{i} }}^{2} } \right)^{1/2}$$
(75)
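The update of Eq. (75) is a one-liner, and it can be checked that it exactly zeroes the stationarity condition of Eq. (74) (the function name `update_w` and the sample values are illustrative):

```python
import numpy as np

def update_w(C_inv_diag, lam):
    """Weight update of Eq. (75): w_hat_i = (C^{-1}_ii + lambda_i^2)^{1/2}.
    C_inv_diag: diagonal entries of C^{-1}; lam: current PCE coefficients."""
    return np.sqrt(C_inv_diag + lam**2)

# Plugging w_hat back into the derivative of Eq. (74),
# (C^{-1}_ii - w_i^2 + lambda_i^2) / w_i^3, gives zero for every i.
c_diag = np.array([0.4, 0.1, 0.0])
lam = np.array([1.2, -0.3, 2.0])
w_hat = update_w(c_diag, lam)
residual = (c_diag - w_hat**2 + lam**2) / w_hat**3
```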

In Sect. 4.1, random noise is not considered in the GPR modeling, i.e., \(\sigma^{2} = 0\). Therefore, Eq. (69) reduces to

$$\hat{\mu } = \left( {F^{T} R^{ - 1} F} \right)^{ - 1} F^{T} R^{ - 1} \boldsymbol{\mathcal{Y}}$$
(76)

and \(C^{ - 1} = \sigma^{2} \left( {\Phi \Phi^{T} + \sigma^{2} W^{ - 1} } \right)^{ - 1}\) vanishes when \(\sigma^{2} = 0\). Then, \(\hat{w}_{{{\varvec{\alpha}}_{i} }} = \lambda_{{{\varvec{\alpha}}_{i} }}\) and

$$\hat{\varvec{w}} = {\varvec{\lambda}} = W\Phi \left( {\Phi^{T} W\Phi } \right)^{ - 1} {\varvec{\beta}} = W\Phi {\varvec{\tau}}$$
(77)

To compute the values of \(\hat{\mu }\) and \(\hat{\varvec{w}}\), the fixed-point iteration is used to solve Eqs. (76) and (77) in Sect. 4.1.
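The noise-free scheme of Eq. (77) can be sketched as a fixed-point loop. This is our illustrative reading, not the authors' code: the trend term is assumed already removed (\({\varvec{\beta}} = \boldsymbol{\mathcal{Y}} - F\hat{\mu}\)), a small ridge term `eps` is added for invertibility, and since Eq. (75) gives \(\hat{w}_i = (\lambda_i^2)^{1/2}\) when \(C^{-1}\) vanishes, the update takes the magnitude of each coefficient.

```python
import numpy as np

def fixed_point_weights(Phi, beta, iters=50, eps=1e-10):
    """Fixed-point iteration for the noise-free weight equation (77):
    w_i <- |lambda_i|, with lambda = W Phi (Phi^T W Phi)^{-1} beta.
    Phi: (P, N) basis evaluations; beta: (N,) centred responses.
    eps regularizes Phi^T W Phi, which is rank-deficient when P < N."""
    P, N = Phi.shape
    w = np.ones(P)                            # uninformative starting weights
    for _ in range(iters):
        W = np.diag(w)
        G = Phi.T @ W @ Phi + eps * np.eye(N)
        lam = W @ Phi @ np.linalg.solve(G, beta)
        w = np.abs(lam)                       # Eq. (75) with C^{-1} = 0
    return w
```

In the degenerate case where \(\Phi\) is square and invertible, \(\lambda = \Phi^{-T}{\varvec{\beta}}\) independently of W, so the loop converges in one step; in general the iteration shrinks weights of basis terms that contribute little to \({\varvec{\beta}}\).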

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Shang, X., Zhang, Z., Fang, H. et al. Global sensitivity analysis using polynomial chaos expansion enhanced Gaussian process regression method. Engineering with Computers 40, 1231–1246 (2024). https://doi.org/10.1007/s00366-023-01851-6