Novel model calibration method via non-probabilistic interval characterization and Bayesian theory

https://doi.org/10.1016/j.ress.2018.11.005

Highlights

  • A combined interval-Bayesian framework is constructed for model calibration.

  • Interval parameters via Bayesian update give a faithful uncertainty representation.

  • An unbiased estimator is presented for interval quantification with limited data.

  • An interval sampling method is developed for interval response prediction.

Abstract

For many uncertainty-based engineering practices, the information or experimental data used to construct the uncertainty analysis model are often deficient, rendering traditional probabilistic methods ineffective. In this context, this paper proposes a novel model calibration method that combines non-probabilistic interval technology with Bayesian analysis theory. First, based on both the mean value and the standard deviation of the available sample data, a new interval quantification method is introduced to approximately describe the bounds of the uncertain input parameters. Then, via the well-known Kennedy and O'Hagan model and Bayesian theory, an interval parameter calibration framework is constructed that can be used to increase the agreement between experimental response measurements and computational response results. To reduce the execution time of uncertain response prediction with respect to interval parameters, an efficient interval sampling method is proposed that utilizes interval endpoints and extreme points. Finally, the feasibility of the proposed method is demonstrated using the renowned Sandia thermal challenge problem.

Introduction

Experimental tests are an important means of engineering analysis; however, the expense of extensive testing is considerable [1], [2]. Moreover, certain physical quantities are difficult to measure directly in complex environments. Nowadays, with the rapid development of computing capacity, computational models play an increasingly prominent role in complex engineering systems. Before extending a computational model toward practical application, a crucial question needs to be considered [3], [4], [5]: how can confidence in modeling and simulation accuracy be evaluated? Model validation, defined as “the process of determining the degree to which a computational model is an accurate representation of the real world” [6], is an efficient technique for assessing the correctness of a model for its intended purpose. It is often contrasted with model verification, which assesses the solution accuracy of a model. Moreover, model validation is often implemented together with model calibration, which seeks to correct the model by adjusting the model parameters. Recently, model validation has become an increasingly important subject in both academic and industrial research [7], [8], [9]. In the area of computational mechanics, the American Institute of Aeronautics and Astronautics (AIAA) and the American Society of Mechanical Engineers (ASME) formulated long-term plans and published specific guidance documents for model validation [6], [10]. In 2000, the concept of model validation was particularly emphasized in the Accelerated Strategic Computing Initiative program of the U.S. Department of Energy (DoE) [11]. To increase general awareness of the various available methods for model validation, Sandia National Laboratories organized a workshop in 2006, at which three challenge problems were provided for discussion [12], [13].

The available model validation methods can be grouped into two categories: the deterministic framework and the uncertain framework [14]. Many conventional model validation activities are conducted within a deterministic framework, where both the experimental data and the numerical predictions are treated as unambiguous [15]. However, various uncertainties are unavoidable in practical engineering due to manufacturing errors, measurement inaccuracy, and incomplete knowledge [16], [17]. By comparison, uncertainty-based model validation, which accounts more comprehensively for the presence and importance of uncertainties, is more practical in the context of modern industry [18], [19], [20], [21], [22]. In the work of Kennedy and O'Hagan, several distinct sources of uncertainty in computational models and experimental observations were characterized [23]. Focusing on stochastic validation with uncertainty in both predictions and experiments, Chen et al. introduced several mathematical examples to assess the four main types of validation metrics [24]. Using stochastic uncertainty propagation and data transformation, a generic model validation method was proposed and applied to decrease the number of required experimental tests [25]. As an important branch of probability mathematics, Bayesian theory has also played a prominent role in model validation [26], [27], [28], [29]. Babuska et al. presented a systematic probabilistic approach in which the rejection procedures during model validation were carried out by Bayesian updates and the posterior density was obtained from accreditation experiments [30]. Using a likelihood ratio as the model assessment metric, Jiang and Mahadevan proposed a Bayesian risk-based decision method for model validation under random uncertainty [31]. Given prior information and noisy measurement data, Rosic et al. presented a sampling-free Bayesian updating method based on polynomial chaos representations [32]. Sankararaman et al. investigated a Bayesian methodology for uncertain model assessment in fatigue crack growth, in which Bayesian hypothesis tests and the Bayes factor metric were adopted to quantify the confidence level of the model prediction [33].

In summary, the available research on uncertainty-based model validation mainly focuses on the probabilistic framework, where uncertainties are characterized by probability theory. To obtain accurate results with probabilistic approaches, a large volume of sample information is required at the early stage to construct the primary probabilistic characteristics of the uncertain parameters. Unfortunately, for many engineering problems, sufficient data from a large number of physical experiments is not always available, or is very expensive to obtain. Faced with such scarcity of sample data, the non-probabilistic interval theory offers a more faithful uncertainty characterization than traditional probabilistic methods, since only the lower and upper bounds of the uncertain parameters need to be determined [34]. Over the last two decades, imprecise probability theory, in which the probability distribution parameters are treated as uncertain rather than as crisp real values, has been widely investigated for uncertainty quantification [35], [36], [37], [38], [39]. Meanwhile, its application to model validation has attracted increasing attention [29]. By comparing cumulative distribution functions from the simulation and the experiment, Ferson et al. proposed an area validation metric [40]. This metric represents an empirical assessment of the model-form uncertainty and provides an interpretation of the evidence for disagreement between simulation results and experimental measurements. Similarly, for imprecise probability models with interval-valued parameters, a novel validation metric was defined based on the shortest distance between two intervals [41]. Based on the concept of interval fitting degree, Wang et al. presented a new quantitative validation metric by which the agreement between computational and experimental response intervals can be evaluated [42]. Sankararaman and Mahadevan developed a methodology to assess the validity of computational models when the random input variables are affected by epistemic uncertainties such as interval data, sparse point data, and probability distributions with parameter uncertainty [43]. Despite these important achievements, model validation within the imprecise probability framework remains at a preliminary research stage.

To cope with limited experimental data, this study proposes a novel model validation and calibration method under a combined interval-Bayesian framework. The layout of this paper is as follows. A brief review of interval-based uncertainty quantification is provided in Section 2. Section 3 then describes the process of interval model calibration within the Bayesian analysis framework by means of the available response information. Section 4 defines an interval model validation metric and presents an efficient interval sampling method for response prediction. The Sandia thermal challenge problem is investigated in Section 5 to verify the proposed method. Finally, the paper closes with a brief discussion.

Section snippets

Interval-based uncertainty quantification

As mentioned above, the non-probabilistic interval theory enables a more faithful characterization of uncertainty under limited experimental data. As the basis for the subsequent work, this section first reviews several fundamental concepts of interval theory. Moreover, a new interval quantification method is introduced that uses the mean value and standard deviation of the available sample data [42], [44].

In interval mathematics, an interval variable $a^I$ in the real number field $\mathbb{R}$ can be defined by its lower bound $\underline{a}$ and upper bound $\overline{a}$ as $a^I = [\underline{a}, \overline{a}] = \{a \in \mathbb{R} \mid \underline{a} \le a \le \overline{a}\}$.
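
The preview elides the quantification formula itself, so the following is a minimal sketch of mean/standard-deviation-based interval quantification, assuming bounds of the form $\bar{x} \pm k s$; the spread factor $k = \sqrt{3}$ and the function name are illustrative assumptions, not the paper's exact unbiased estimator.

```python
import numpy as np

def interval_from_samples(samples, k=np.sqrt(3.0)):
    """Quantify an uncertain parameter as the interval
    [mean - k*std, mean + k*std] from limited sample data.

    Assumption of this sketch: k = sqrt(3) is the factor for which a
    uniform distribution's standard deviation equals the sample's;
    the paper's unbiased estimator is not visible in this preview.
    """
    x = np.asarray(samples, dtype=float)
    mean = x.mean()
    std = x.std(ddof=1)  # unbiased sample standard deviation
    return mean - k * std, mean + k * std

lo, hi = interval_from_samples([2.1, 1.9, 2.3, 2.0, 2.2])
print(f"a^I = [{lo:.3f}, {hi:.3f}]")
```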

Interval model calibration based on the Bayesian theory

For an input-process-output system with uncertainty, the quantification accuracy of the uncertain input parameters is one of the key factors in establishing the computational model. In addition to prior information (such as experimental data) about the input parameters, the available posterior information (such as output responses) is sometimes useful for improving the quantification of the uncertain parameters [30]. Thus, under the framework of interval-based uncertainty quantification in Section 2, the
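
Although the full formulation is elided in this preview, the Kennedy and O'Hagan model named in the abstract is commonly written as follows; the symbols below follow the standard KOH presentation and are not necessarily the paper's notation:

$$z^e(\mathbf{x}) = z^c(\mathbf{x}, \boldsymbol{\alpha}) + \delta(\mathbf{x}) + \varepsilon,$$

where $z^e$ is the experimental response, $z^c$ the computational model with calibration parameters $\boldsymbol{\alpha}$, $\delta(\mathbf{x})$ the model discrepancy term, and $\varepsilon$ the measurement error. Bayes' theorem then updates the parameter description from the measured responses $\mathbf{D}$ via $p(\boldsymbol{\alpha} \mid \mathbf{D}) \propto p(\mathbf{D} \mid \boldsymbol{\alpha})\, p(\boldsymbol{\alpha})$.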

Response prediction via interval sampling method

When the system output responses have been numerically calculated by the computational model, another concern of engineers is how to compare them with the experimental measurements under the interval theory-based framework. This is also an important aspect of model validation and calibration. Without loss of generality, this study assumes that the computational model for response prediction can be written as the following implicit function:

$$f\big(\mathbf{x}, \mathbf{z}^c(\mathbf{x}, \boldsymbol{\alpha})\big) = 0,$$

where $\mathbf{x}$ represents the design parameter
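
As a rough illustration of endpoint-based interval sampling, the sketch below bounds a response by evaluating the model at every combination of interval endpoints (the vertex method); the function names and the toy model are assumptions, and the interior extreme points that the paper additionally exploits are omitted, so this sketch is exact only for responses monotonic in each parameter.

```python
import itertools
import numpy as np

def interval_response_bounds(model, lower, upper):
    """Estimate the response interval of `model` over box-shaped interval
    parameters by evaluating all 2^m endpoint combinations (vertex method).

    Sketch only: exact for responses monotonic in each parameter; the
    paper additionally samples interior extreme points, omitted here.
    """
    vertices = itertools.product(*zip(lower, upper))
    values = [model(np.asarray(v)) for v in vertices]
    return min(values), max(values)

# Usage: a toy model that is monotonic in both interval parameters.
f = lambda p: p[0] ** 2 + 3.0 * p[1]
lo, hi = interval_response_bounds(f, lower=[1.0, -1.0], upper=[2.0, 1.0])
print(f"z^I = [{lo}, {hi}]")  # -> z^I = [-2.0, 7.0]
```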

Numerical example

To verify the feasibility of the proposed method, the thermal challenge problem presented by the Sandia National Laboratories [52] will be investigated in this section.
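
For orientation, the challenge concerns one-dimensional heat conduction through a slab of thickness $L$ subjected to a constant heat flux $q$ at the front face, with the rear face insulated; the analytical temperature model commonly quoted in the challenge-problem literature (reproduced here as background, not from this paper's text) is

$$T(x,t) = T_i + \frac{qL}{k}\left[\frac{kt}{\rho C L^2} + \frac{1}{3} - \frac{x}{L} + \frac{1}{2}\left(\frac{x}{L}\right)^2 - \frac{2}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}\,e^{-n^2\pi^2 kt/(\rho C L^2)}\cos\frac{n\pi x}{L}\right],$$

where $T_i$ is the initial temperature, $k$ the thermal conductivity, and $\rho C$ the volumetric heat capacity; $k$ and $\rho C$ are the uncertain material parameters to be characterized as intervals.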

Conclusions

By combining the non-probabilistic interval uncertainty theory with the Bayesian analysis strategy, this paper develops a novel approach for model validation and calibration in the presence of limited experimental data. In summary, the following conclusions can be drawn from this study:

  • (1)

    The non-probabilistic interval theory is utilized as the uncertainty modeling strategy, which is more suitable for handling insufficient information with limited sample data. Based on the mean value and standard

Acknowledgment

This work was supported by the Alexander von Humboldt Foundation.

References

  • I. Babuska et al., A systematic approach to model validation based on Bayesian updates and prediction related rejection criteria, Comput Methods Appl Mech Eng (2008).
  • X. Jiang et al., Bayesian risk-based decision method for model validation under uncertainty, Reliab Eng Syst Saf (2007).
  • B.V. Rosic et al., Sampling-free linear Bayesian update of polynomial chaos representations, J Comput Phys (2012).
  • S. Sankararaman et al., Uncertainty quantification and model validation of fatigue crack growth prediction, Eng Fract Mech (2011).
  • M.S. Eldred et al., Mixed aleatory-epistemic uncertainty quantification with stochastic expansions and optimization-based interval estimation, Reliab Eng Syst Saf (2011).
  • C. Wang et al., Hybrid uncertain analysis for steady-state heat conduction with random and interval parameters, Int J Heat Mass Transf (2015).
  • B. Xia et al., Hybrid uncertain analysis of acoustic field with interval random parameters, Comput Methods Appl Mech Eng (2013).
  • S. Ferson et al., Model validation and predictive capability for the thermal challenge problem, Comput Methods Appl Mech Eng (2008).
  • C. Wang et al., Epistemic uncertainty-based model validation via interval propagation and parameter calibration, Comput Methods Appl Mech Eng (2018).
  • S. Sankararaman et al., Model validation under epistemic uncertainty, Reliab Eng Syst Saf (2011).
  • H.J. Pradlwarter et al., The use of kernel densities and confidence intervals to cope with insufficient data in validation experiments, Comput Methods Appl Mech Eng (2008).
  • K.J. Dowding et al., Formulation of the thermal problem, Comput Methods Appl Mech Eng (2008).
  • C. Jiang et al., Optimization of structures with uncertain constraints based on convex model and satisfaction degree of interval, Comput Methods Appl Mech Eng (2007).
  • R.G. Hills et al., Thermal challenge problem: summary, Comput Methods Appl Mech Eng (2008).
  • P.J. Roache, Verification and validation in computational science and engineering (1998).
  • R.G. Sargent, Verification and validation of simulation models, J Simul (2013).
  • S.A. Billings et al., Nonlinear model validation using correlation tests, Int J Control (1994).