Novel model calibration method via non-probabilistic interval characterization and Bayesian theory
Introduction
Experimental tests are an important means of engineering analysis; however, the expense of extensive testing is often considerable [1], [2]. Moreover, in complex environments, certain physical quantities are difficult to measure directly. Nowadays, with the rapid growth of computing capacity, computational models play an increasingly prominent role in complex engineering systems. Before a computational model is extended toward practical application, a crucial question needs to be considered [3], [4], [5]: how can confidence in modeling and simulation accuracy be evaluated? Model validation, defined as “the process of determining the degree to which a computational model is an accurate representation of the real world” [6], is an efficient technique for assessing the correctness of a model for its intended purpose. It is often contrasted with model verification, which assesses the solution accuracy of a model. Moreover, model validation is often implemented together with model calibration, which seeks to correct the model by adjusting its parameters. Recently, model validation has become an increasingly important subject in both academic and industrial research [7], [8], [9]. In the area of computational mechanics, the American Institute of Aeronautics and Astronautics (AIAA) and the American Society of Mechanical Engineers (ASME) formulated long-term plans and published specific guidance documents for model validation [6], [10]. In 2000, the concept of model validation was particularly emphasized in the Accelerated Strategic Computing Initiative program of the U.S. Department of Energy (DoE) [11]. To increase general awareness of the various available methods for model validation, the Sandia National Laboratories organized a workshop in 2006, where three challenge problems were provided for discussion [12], [13].
The available model validation methods can be grouped into two categories: the deterministic framework and the uncertain framework [14]. Many conventional model validation activities are conducted within a deterministic framework, where the information in both experimental data and numerical predictions is considered unambiguous [15]. However, various uncertainties are unavoidable in practical engineering due to manufacturing errors, measurement inaccuracy, and incomplete knowledge [16], [17]. By comparison, uncertainty-based model validation, which accounts more comprehensively for the presence and importance of uncertainties, is more practical in the context of modern industry [18], [19], [20], [21], [22]. In the work of Kennedy and O'Hagan, several different sources of uncertainty in computational models and experimental observations were characterized [23]. Focusing on stochastic validation with uncertainty in both predictions and experiments, Chen et al. introduced several mathematical examples to assess the four main types of validation metrics [24]. Using stochastic uncertainty propagation and data transformation, a generic model validation method was proposed and applied to reduce the number of required experimental tests [25]. As an important branch of probability mathematics, Bayesian theory has also played a prominent role in model validation [26], [27], [28], [29]. Babuska et al. presented a systematic probabilistic approach in which rejection procedures during model validation were carried out by Bayesian updates and the posterior density was obtained from accreditation experiments [30]. Using a likelihood ratio as the model assessment metric, Jiang and Mahadevan proposed a Bayesian risk decision method for model validation under random uncertainty [31]. In response to prior information and noisy measurement data, Rosic et al. presented a sampling-free Bayesian updating method based on polynomial chaos representations [32]. Sankararaman et al. investigated a Bayesian methodology for uncertain model assessment in fatigue crack growth, where Bayesian hypothesis tests and the Bayes factor metric were adopted to quantify the confidence level of model predictions [33].
In summary, the available research on uncertainty-based model validation mainly focuses on a probabilistic framework, where uncertainties are characterized by probability theory. To obtain accurate results with probabilistic approaches, a large volume of sample information is required at an early stage to construct the primary probabilistic characteristics of the uncertain parameters. Unfortunately, for many engineering problems, sufficient data from a large number of physical experiments is not always available, or is very expensive to obtain. Faced with such scarcity of sample data, the non-probabilistic interval theory offers a more appropriate uncertainty characterization than traditional probabilistic methods, since only the lower and upper bounds of the uncertain parameters need to be determined [34]. Alongside the widespread interest of the last two decades, the imprecise probability theory, where the probability distribution parameters are considered uncertain rather than crisp real values, has been investigated extensively for uncertainty quantification [35], [36], [37], [38], [39]. Meanwhile, its application to model validation has attracted increasing attention [29]. By comparing cumulative distribution functions from simulation and experiment, Ferson et al. proposed an area validation metric [40]. This metric represents an empirical assessment of model-form uncertainty and provides an interpretation of the evidence for disagreement between simulation results and experimental measurements. Similarly, for imprecise probability models with interval-valued parameters, a novel validation metric was defined based on the shortest distance between two intervals [41]. Based on the concept of interval fitting degree, Wang et al. presented a new quantitative validation metric by which the agreement between computational and experimental response intervals could be evaluated [42]. Sankararaman and Mahadevan developed a methodology to assess the validity of computational models when the random input variables are affected by epistemic uncertainties such as interval data, sparse point data, and probability distributions with parameter uncertainty [43]. Despite these achievements, model validation under the imprecise probability framework remains at a preliminary research stage.
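The area validation metric of Ferson et al. [40] can be illustrated with a short numerical sketch: it integrates the absolute difference between the empirical CDFs of the simulation and experimental samples. The function name, grid resolution, and test data below are illustrative choices, not taken from the cited paper.

```python
import numpy as np

def area_validation_metric(sim_samples, exp_samples, n_grid=2000):
    """Area between the empirical CDFs of simulation and experiment.

    Zero means the two empirical distributions coincide; larger values
    indicate greater disagreement (in units of the response quantity).
    """
    sim = np.sort(np.asarray(sim_samples, dtype=float))
    exp = np.sort(np.asarray(exp_samples, dtype=float))
    grid = np.linspace(min(sim[0], exp[0]), max(sim[-1], exp[-1]), n_grid)
    # Empirical CDFs evaluated on a common grid
    F_sim = np.searchsorted(sim, grid, side="right") / sim.size
    F_exp = np.searchsorted(exp, grid, side="right") / exp.size
    # Rectangle-rule integration of |F_sim - F_exp| over the grid
    return float(np.sum(np.abs(F_sim - F_exp)) * (grid[1] - grid[0]))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 500)
zero_area = area_validation_metric(a, a)        # identical samples: 0.0
shift_area = area_validation_metric(a, a + 1.0) # pure shift: area near 1.0
```

For a pure location shift between two otherwise identical sample sets, the area metric approximately equals the shift, which makes its units and interpretation concrete.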
Given the limitations of experimental data, this study proposes a novel model validation and calibration method under a combined interval-Bayesian framework. The layout of this paper is as follows. A brief review of interval-based uncertainty quantification is provided in Section 2. Then, by means of the available response information, the process of interval model calibration in a Bayesian analysis framework is described in Section 3. Section 4 defines an interval model validation metric; furthermore, an efficient interval sampling method is presented for response prediction. The Sandia thermal challenge problem is investigated in Section 5 to verify the proposed method. Finally, the paper closes with a brief discussion.
Section snippets
Interval-based uncertainty quantification
As mentioned above, the non-probabilistic interval theory enables a more precise characterization of the uncertainty with limited experimental data. As the basis of subsequent research work, this section first reviews several fundamental concepts of interval theory. Moreover, a new interval quantification method is introduced by using the mean value and standard deviation of the available sample data [42], [44].
In interval mathematics, an interval variable aI in the real number field R can
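The idea of quantifying an interval variable from sparse sample data, as in [42], [44], can be sketched as follows: the interval midpoint is taken as the sample mean and the radius as a multiple of the sample standard deviation. The coverage factor `k` and the helper name are illustrative assumptions; the paper's exact quantification rule may differ.

```python
import statistics

def interval_from_samples(samples, k=3.0):
    """Build an interval [mean - k*std, mean + k*std] from sample data.

    k is an illustrative coverage factor, not the paper's prescription.
    """
    m = statistics.mean(samples)
    s = statistics.stdev(samples)  # sample standard deviation
    return (m - k * s, m + k * s)

data = [9.8, 10.1, 10.0, 9.9, 10.2]
lo, hi = interval_from_samples(data)
midpoint = 0.5 * (lo + hi)  # interval midpoint (the sample mean)
radius = 0.5 * (hi - lo)    # interval radius (k times the sample std)
```

Only two summary statistics are needed, which is why this style of interval quantification remains usable when the sample size is far too small for a reliable probability distribution fit.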
Interval model calibration based on the Bayesian theory
For an input-process-output system with uncertainty, quantification accuracy of the uncertain input parameters is one of the key factors for the establishment of the computational model. In addition to prior information (such as experimental data) about the input parameters, the available posterior information (such as output responses) is sometimes useful to improve uncertain parameter quantification [30]. Thus, under the framework of interval-based uncertainty quantification in Section 2, the
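The role of posterior (response) information in sharpening an interval quantification can be illustrated with a loose rejection-style sketch: draw parameter values from the prior interval, keep those whose model response is consistent with the measured response interval, and take the bounds of the accepted set as the calibrated interval. This is only an illustration of the idea; the paper's Bayesian formulation is not reproduced here, and all names below are our own.

```python
import random

def calibrate_interval(prior, model, measured, n=20000, seed=0):
    """Shrink a prior parameter interval using a measured response interval.

    Rejection sketch: sample uniformly over the prior interval, keep
    parameter values whose response lies inside the measured interval,
    and return the bounds of the accepted set.
    """
    rng = random.Random(seed)
    lo, hi = prior
    y_lo, y_hi = measured
    accepted = [a for a in (rng.uniform(lo, hi) for _ in range(n))
                if y_lo <= model(a) <= y_hi]
    if not accepted:
        return prior  # measurements carry no usable information
    return (min(accepted), max(accepted))

# Toy model: response is linear in the uncertain parameter.
model = lambda a: 2.0 * a + 1.0
posterior = calibrate_interval(prior=(0.0, 5.0), model=model,
                               measured=(3.0, 7.0))
# Consistency requires 3 <= 2a + 1 <= 7, i.e. a in [1, 3].
```

In the toy case the calibrated interval converges to [1, 3], strictly inside the prior [0, 5], showing how response measurements tighten the input quantification.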
Response prediction via interval sampling method
When the system output responses have been numerically calculated by the computational model, another concern of engineers is how to compare them with the experimental measurements under the interval theory-based framework. This is also an important aspect of model validation and calibration. Without loss of generality, this study assumes that the computational model for response prediction can be written as the following implicit function:
where x represents the design parameter
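Propagating interval parameters through a computational model to obtain a response interval can be sketched generically: evaluate the model at the vertices of the parameter box plus random interior points, and take the extreme responses as the predicted bounds. This is a generic brute-force sketch, not the paper's efficient interval sampling method, and the toy model is our own.

```python
import itertools
import random

def response_interval(model, param_intervals, n_random=5000, seed=1):
    """Estimate the response interval [y_min, y_max] by sampling.

    Evaluates the model at every vertex of the parameter box and at
    random interior points, then returns the extreme responses.
    """
    rng = random.Random(seed)
    samples = list(itertools.product(*param_intervals))  # all vertices
    for _ in range(n_random):
        samples.append(tuple(rng.uniform(lo, hi)
                             for lo, hi in param_intervals))
    responses = [model(*x) for x in samples]
    return (min(responses), max(responses))

# Toy model with two interval parameters.
model = lambda k, q: q / k  # e.g. a steady flux-to-conductivity ratio
y_lo, y_hi = response_interval(model, [(0.5, 1.0), (2.0, 3.0)])
# The model is monotonic in both parameters, so the extremes occur
# at the vertices: y in [2.0, 6.0].
```

For monotonic models the vertex evaluations alone recover the exact response bounds; the random interior points guard against interior extrema in non-monotonic models, at the cost of many model evaluations, which is exactly the cost an efficient interval sampling method aims to reduce.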
Numerical example
To verify the feasibility of the proposed method, the thermal challenge problem presented by the Sandia National Laboratories [52] will be investigated in this section.
Conclusions
By combining the non-probabilistic interval uncertainty theory with the Bayesian analysis strategy, this paper develops a novel approach for model validation and calibration under the premise of limited experimental data. In summary, the following conclusions can be derived from this study:
- (1) The non-probabilistic interval theory is utilized as the uncertainty modeling strategy, which is more suitable for handling insufficient information with limited sample data. Based on the mean value and standard
Acknowledgment
This work was supported by the Alexander von Humboldt Foundation.
References (54)
- et al., CFD simulation of industrial bubble columns: Numerical challenges and model validation successes, Appl Math Model (2017)
- et al., Model validation and calibration based on component functions of model output, Reliab Eng Syst Saf (2015)
- et al., Validation challenge workshop, Comput Methods Appl Mech Eng (2008)
- Preface: Sandia National Laboratories Validation Challenge Workshop, Comput Methods Appl Mech Eng (2008)
- et al., Separation of aleatory and epistemic uncertainty in probabilistic model validation, Reliab Eng Syst Saf (2016)
- et al., Novel reliability-based optimization method for thermal structure with hybrid random, interval and fuzzy parameters, Appl Math Model (2017)
- et al., Evidence-theory-based model validation method for heat transfer system with epistemic uncertainty, Int J Therm Sci (2018)
- et al., Probabilistic model validation for uncertain nonlinear systems, Automatica (2014)
- et al., Probabilistic risk assessment based model validation method using Bayesian network, Reliab Eng Syst Saf (2018)
- et al., Quantitative model validation techniques: new insights, Reliab Eng Syst Saf (2013)
- A systematic approach to model validation based on Bayesian updates and prediction related rejection criteria, Comput Methods Appl Mech Eng
- Bayesian risk-based decision method for model validation under uncertainty, Reliab Eng Syst Saf
- Sampling-free linear Bayesian update of polynomial chaos representations, J Comput Phys
- Uncertainty quantification and model validation of fatigue crack growth prediction, Eng Fract Mech
- Mixed aleatory-epistemic uncertainty quantification with stochastic expansions and optimization-based interval estimation, Reliab Eng Syst Saf
- Hybrid uncertain analysis for steady-state heat conduction with random and interval parameters, Int J Heat Mass Transf
- Hybrid uncertain analysis of acoustic field with interval random parameters, Comput Methods Appl Mech Eng
- Model validation and predictive capability for the thermal challenge problem, Comput Methods Appl Mech Eng
- Epistemic uncertainty-based model validation via interval propagation and parameter calibration, Comput Methods Appl Mech Eng
- Model validation under epistemic uncertainty, Reliab Eng Syst Saf
- The use of kernel densities and confidence intervals to cope with insufficient data in validation experiments, Comput Methods Appl Mech Eng
- Formulation of the thermal problem, Comput Methods Appl Mech Eng
- Optimization of structures with uncertain constraints based on convex model and satisfaction degree of interval, Comput Methods Appl Mech Eng
- Thermal challenge problem: summary, Comput Methods Appl Mech Eng
- Verification and validation in computational science and engineering
- Verification and validation of simulation models, J Simul
- Nonlinear model validation using correlation tests, Int J Control