The Method of Manufactured Universes for validating uncertainty quantification methods

https://doi.org/10.1016/j.ress.2010.11.012

Abstract

The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which “experimental” data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport “universe”, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new “experiments” within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies.

Introduction

The past decade has seen rapid advancement in complex computational projects and increasing dependence on these projects to support high-consequence decisions. An immediate result of this trend is the need for improved uncertainty quantification (UQ) methods to accompany the scientific simulations such that they deliver not only the best estimate of some quantity of interest, but also a measure of uncertainty in that estimate. One important example of UQ methods development is the quantification of margins and uncertainties (QMU) framework employed by the National Nuclear Security Administration's (NNSA) laboratories for assessment of the nation's nuclear weapon stockpile. This framework is a collection of methodologies designed to fuse decision inputs, such as experimental results, simulation results, theoretical understandings, and expert judgment, and their associated uncertainties in support of stockpile decisions.

Since its inception, the QMU framework has become an increasingly important link between scientific activities and stockpile stewardship priorities. Also of increasing importance, however, is the requirement that decision-support frameworks, like QMU, are themselves subjected to rigorous verification and validation assessments. In 2006, Congress issued a mandate for the National Academies to review the QMU framework and the consistency of its implementation at the national security laboratories [1]. Simply put, the review committee was tasked with deciding whether the combination of advanced simulation techniques, existing testing data, expert judgment, and the QMU framework appropriately support assessment and certification decisions in the absence of underground testing.

This QMU initiative is an example of the fundamental challenge to the predictive science and engineering community: How can we predict the behavior of complex systems using simulation, and how can we assess our predictive capabilities? In recent years, the community has released a number of predictive tools that attempt to infer the relationship between simulation and reality and use that inference to forecast uncertainty in predictions of new simulations or experiments. Validation of these predictive tools, however, is often hindered by scarce and/or uncertain experimental data or by the overwhelming complexity of the real-world problems of interest. Nonetheless, validation is a fundamental requirement that provides confidence in predictive models and allows an unbiased, knowledgeable evaluator to determine the credibility of that confidence.

With this motivation in mind, we present the Method of Manufactured Universes (MMU) as a framework that facilitates a comprehensive validation study of a given UQ method, perhaps as implemented in a given software system. To apply MMU, one defines the laws that govern a system, uses these laws to construct “experimental” results, simulates the “experiments” using some computational model, and then tests the ability of the given UQ method to quantify the difference between simulation and “reality”. This paper presents preliminary results from a computationally simple yet rich “universe” in which two UQ methodologies are examined: first, a Gaussian process code [2], [3], [4] from Los Alamos National Laboratory (LANL) and, second, a Bayesian Multivariate Adaptive Regression Spline (BMARS) [5] technique combined with a filtering/weighting method. The conclusion drawn from these results is that MMU is a powerful technique that can help identify problems in UQ software, help computational scientists and engineers understand the subtleties, strengths, and weaknesses of various UQ methodologies, and help decision-makers evaluate the credibility of predictive statements.
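To make the MMU workflow concrete, the following minimal Python sketch (our illustration, not code from the paper) walks through the steps with a hypothetical one-dimensional “universe”: a manufactured truth, “experiments” with optional noise, a deliberately imperfect model, a crude placeholder calibration/UQ step, and a coverage check of the resulting predictions. All functions and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Manufactured "universe": a chosen truth function (hypothetical example).
def manufactured_truth(x):
    return np.exp(-2.0 * x) + 0.1 * np.sin(5.0 * x)

# 2. "Experiments": sample the truth, optionally with measurement error.
def run_experiment(x, noise_sd=0.0):
    return manufactured_truth(x) + rng.normal(0.0, noise_sd, size=np.shape(x))

# 3. Imperfect model: deliberately misses part of the physics.
def imperfect_model(x, theta):
    return np.exp(-theta * x)          # no oscillatory term

# 4. Apply a UQ method (placeholder only: calibrate theta by least squares
#    and report a crude predictive interval from the residual spread).
x_train = np.linspace(0.0, 1.0, 10)
y_train = run_experiment(x_train)
thetas = np.linspace(1.0, 3.0, 201)
sse = [np.sum((imperfect_model(x_train, t) - y_train) ** 2) for t in thetas]
theta_hat = thetas[int(np.argmin(sse))]
resid_sd = np.std(imperfect_model(x_train, theta_hat) - y_train)

# 5. Assess: do the predictive intervals cover the manufactured truth?
x_new = np.linspace(0.0, 1.0, 50)
pred = imperfect_model(x_new, theta_hat)
covered = np.abs(pred - manufactured_truth(x_new)) <= 2.0 * resid_sd
print(f"theta_hat = {theta_hat:.2f}, coverage = {covered.mean():.0%}")
```

Because the truth is manufactured, the coverage statement in the final step can be checked exactly, which is the point of the framework.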

In Section 2 we define MMU in more detail, and in Section 3 we describe the manufactured universe that serves as the example in this paper. We also define the approximate mathematical model of the manufactured reality. In Section 4 we describe the two UQ methodologies that we use as examples in this paper. Sections 5 and 6 contain results from these methodologies, and Section 7 contains conclusions.

Section snippets

Introduction of the Method of Manufactured Universes

The motivation for this framework is the need to understand the assumptions embedded in UQ methods and the manner in which the effects of these assumptions propagate to the method's output. For example, a common practice for describing an unknown distribution (prior distribution or output uncertainty, for example) is to assume a Gaussian distribution with some estimated mean and standard deviation. The underlying function, however, may have only finite support and may be asymmetric; this
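A brief hypothetical illustration of this point: if the true uncertainty is asymmetric with finite support (here a Beta distribution, chosen only for illustration), a Gaussian summary built from the same mean and standard deviation misstates the tails and assigns probability to impossible values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical "true" uncertainty: asymmetric with finite support on [0, 1].
true_dist = stats.beta(a=2.0, b=8.0)
samples = true_dist.rvs(size=10_000, random_state=rng)

# Common practice: summarize by a Gaussian with the sample mean and std.
gauss = stats.norm(loc=samples.mean(), scale=samples.std(ddof=1))

# The two descriptions disagree about extreme quantiles ...
for q in (0.01, 0.99):
    print(f"q={q}: true={true_dist.ppf(q):.3f}  gaussian={gauss.ppf(q):.3f}")

# ... and the Gaussian assigns probability to impossible (negative) values.
print(f"P(X < 0) under the Gaussian: {gauss.cdf(0.0):.3%}")
```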

Description of the computational models in the particle-transport universe

The experimental data will be generated using analytic solutions of the S8 equations; the simulation will be computed using analytic solutions of the diffusion equation. The next two sections outline these equations and their solutions in 1D slab geometry.
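As a flavor of the “simulation” side only, the sketch below evaluates an analytic diffusion solution for a homogeneous 1D slab with Marshak-type boundary conditions and returns reflected and transmitted partial currents. The cross sections, boundary treatment, and outputs are illustrative assumptions and need not match the paper’s exact universe or conventions.

```python
import numpy as np

def diffusion_slab(sigma_t, c, thickness, j_in=1.0):
    """Minimal 1-D slab diffusion sketch (Marshak boundary conditions).

    sigma_t   : total cross section
    c         : scattering ratio, so sigma_a = (1 - c) * sigma_t
    thickness : slab width
    Returns (reflected, transmitted) partial currents for a unit
    incident partial current on the left face.  Conventions here are
    illustrative, not necessarily those used in the paper.
    """
    sigma_a = (1.0 - c) * sigma_t
    D = 1.0 / (3.0 * sigma_t)
    kappa = np.sqrt(sigma_a / D)
    L = thickness

    # phi(x) = A*cosh(kappa x) + B*sinh(kappa x)
    # Left face  : phi/4 - (D/2) phi' = j_in   (incoming current)
    # Right face : phi/4 + (D/2) phi' = 0      (vacuum)
    M = np.array([
        [0.25, -0.5 * D * kappa],
        [0.25 * np.cosh(kappa * L) + 0.5 * D * kappa * np.sinh(kappa * L),
         0.25 * np.sinh(kappa * L) + 0.5 * D * kappa * np.cosh(kappa * L)],
    ])
    A, B = np.linalg.solve(M, [j_in, 0.0])

    phi0, dphi0 = A, B * kappa
    phiL = A * np.cosh(kappa * L) + B * np.sinh(kappa * L)
    dphiL = kappa * (A * np.sinh(kappa * L) + B * np.cosh(kappa * L))

    reflected = 0.25 * phi0 + 0.5 * D * dphi0       # J^- at x = 0
    transmitted = 0.25 * phiL - 0.5 * D * dphiL     # J^+ at x = L
    return reflected, transmitted

print(diffusion_slab(sigma_t=1.0, c=0.9, thickness=2.0))
```

The “experimental” side of the universe replaces this approximate model with an analytic transport (S8) solution, so the model-form error between the two is known by construction.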

The GPMSA algorithm

The GPMSA code from Los Alamos National Laboratory is derived from the Kennedy and O’Hagan univariate (i.e., scalar output) formulation for predictions of experimental measurements [2]:

Y(x_i) = η(x_i, θ) + δ(x_i) + e_i,

where Y is the experimental value being computed or estimated, x_i is a vector of independent inputs, θ is a vector of calibration parameters, η(x_i, θ) is an emulator of the simulation response to inputs x_i and θ, δ(x_i) is a model-discrepancy function, and e_i is a random error term.
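The sketch below is not GPMSA; it is a drastically simplified stand-in for this decomposition using scikit-learn Gaussian processes. An emulator η is fit to simulator runs over (x, θ), a discrepancy δ is fit to the residuals at a fixed (here assumed, not inferred) θ, and predictions combine the two. GPMSA instead infers θ, η, δ, and the observation error jointly in a Bayesian framework; the toy functions and values below are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Toy simulator eta(x, theta) and "reality" that it cannot reproduce exactly.
def simulator(x, theta):
    return np.exp(-theta * x)

def experiment(x):
    return np.exp(-1.7 * x) + 0.05 * x ** 2

# Emulator GP fit to simulator runs on a small (x, theta) design.
x_sim = rng.uniform(0.0, 1.0, 40)
t_sim = rng.uniform(1.0, 3.0, 40)
eta_gp = GaussianProcessRegressor(RBF([0.3, 0.5]) + WhiteKernel(1e-6)).fit(
    np.column_stack([x_sim, t_sim]), simulator(x_sim, t_sim))

# Pretend theta has already been calibrated (fixed here for illustration).
theta_hat = 1.7
x_obs = np.linspace(0.0, 1.0, 8)
y_obs = experiment(x_obs)
eta_at_obs, _ = eta_gp.predict(
    np.column_stack([x_obs, np.full_like(x_obs, theta_hat)]), return_std=True)

# Discrepancy GP delta(x) fit to the residuals y - eta(x, theta_hat).
delta_gp = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(1e-6)).fit(
    x_obs.reshape(-1, 1), y_obs - eta_at_obs)

# Prediction at a new x: eta + delta, with crudely combined uncertainty.
x_new = np.array([[0.65]])
eta_m, eta_s = eta_gp.predict(np.column_stack([x_new, [[theta_hat]]]),
                              return_std=True)
del_m, del_s = delta_gp.predict(x_new, return_std=True)
print(f"prediction = {eta_m[0] + del_m[0]:.4f} "
      f"+/- {np.hypot(eta_s[0], del_s[0]):.4f}")
```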

Analysis of the GPMSA software applied to the simple universe

For our first exercise of the MMU framework, we generate data in the simplest setting available: experimental data with no imposed measurement error or noise and no hidden variables. Moreover, we consider only the reflected particle flow rate as a response function. We select the GPMSA software as the first UQ model.

Analysis of the BMARS algorithm applied to the simple universe

As described in Section 4.2, the Bayesian MARS algorithm uses a specific process to randomly construct a regression spline function that fits a set of training data. In our implementation, we construct such a function as an emulator for the diffusion simulation. The training inputs are the same as those described in Table 2 with the exception that the BMARS emulator does not distinguish between independent and uncertain inputs (thus, we have a 3D emulator). We have removed the experimental
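As a rough illustration of the spline construction (not the paper’s BMARS implementation), the sketch below builds regression bases from randomly placed truncated-linear “hinge” functions and keeps the best least-squares fit. BMARS replaces this naive random search with reversible-jump MCMC over the number, location, and interaction of such terms, and the paper’s emulator is three-dimensional rather than one-dimensional; the training data here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def hinge(x, knot, sign):
    """Truncated-linear (hinge) basis function max(0, sign*(x - knot))."""
    return np.maximum(0.0, sign * (x - knot))

def random_basis(x, n_terms):
    """Randomly place hinge functions, standing in for BMARS's
    reversible-jump sampling over the number and location of knots."""
    cols = [np.ones_like(x)]
    for _ in range(n_terms):
        knot = rng.uniform(x.min(), x.max())
        sign = rng.choice([-1.0, 1.0])
        cols.append(hinge(x, knot, sign))
    return np.column_stack(cols)

# Toy training data (the paper instead trains on diffusion-model outputs).
x_train = np.linspace(0.0, 1.0, 60)
y_train = np.exp(-2.0 * x_train) + 0.01 * rng.normal(size=x_train.size)

# Keep the best of many random spline bases (least-squares coefficients).
best = None
for _ in range(200):
    B = random_basis(x_train, n_terms=6)
    coef, *_ = np.linalg.lstsq(B, y_train, rcond=None)
    err = np.sum((B @ coef - y_train) ** 2)
    if best is None or err < best[0]:
        best = (err, coef, B)
print(f"best residual sum of squares: {best[0]:.2e}")
```

In the multivariate case, MARS-type emulators also include products of hinge functions in different inputs, which is how the 3D emulator mentioned above captures interactions.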

Summary and conclusions

Predictive science and engineering (PS&E) is a young discipline. Its goal can be stated simply: to use simulations along with measurements from past experiments to predict new experimental outcomes. There remains much to learn about existing methodologies for PS&E, and there are undoubtedly improvements yet to be developed. In this paper we have presented the “Method of Manufactured Universes” (MMU), which we offer as a framework to help with the evaluation and improvement of PS&E

Acknowledgments

We thank Derek Bingham (Simon Fraser University) for helpful conversations and for sharing some helpful UQ software. Work by the first author was supported by the Department of Energy Computational Science Graduate Fellowship program under Grant no. DE-FG02-97ER25308. Work by other authors was partially supported by the Predictive Sciences Academic Alliances Program in DOE NNSA-ASC under Grant DE-FC52-08NA28616, and the Lawrence Livermore National Laboratory. This publication is based in part

References (15)

  • The National Research Council. Evaluation of quantification of margins and uncertainties methodology for assessing and...
  • M.C. Kennedy et al. Bayesian calibration of computer models. Journal of the Royal Statistical Society (Series B) (2001).
  • D. Higdon et al. Combining field data and computer simulations for calibration and prediction. SIAM Journal on Scientific Computing (2004).
  • D. Higdon et al. Computer model calibration using high dimensional output. Journal of the American Statistical Association (2008).
  • D. Denison et al. Bayesian MARS. Statistics and Computing (1998).
  • K.M. Case et al. Linear transport theory (1967).
  • Maslowski AE, Adams ML. Behavior of continuous finite element discretizations of the slab-geometry transport equation....

Cited by (16)

  • Non-deterministic model validation methodology for simulation-based safety assessment of automated vehicles

    2021, Simulation Modelling Practice and Theory
    Citation Excerpt:

    This guarantees the transferability of the findings from the manufactured universe to the actual model validation. Stripling et al. [16] analyze UQ methods in a particle transport universe by comparing the actual low-fidelity model with a high-fidelity reference. Whiting et al. [17] create a CFD universe to compare four validation methodologies including PBA under validation and prediction conditions.

  • Wilks’ formula applied to computational tools: A practical discussion and verification

    2019, Annals of Nuclear Energy
    Citation Excerpt:

    This can include both real uncertainty quantification studies or the use of the Method of Manufactured Universes (Stripling et al., 2011).

  • An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    2016, Journal of Computational Physics
    Citation Excerpt:

    In our fourth example, we illustrate the framework for a particle transport model quantifying angular flux in a 1-D slab. The purpose of this example is to illustrate the similarities between our high-to-low framework and the Method of Manufactured Universes [16]. For our final example, we employ the high-fidelity thermal-hydraulics code Hydra-TH to simulate laminar flow in a pipe.

  • Assessment of Model Validation, Calibration, and Prediction Approaches in the Presence of Uncertainty

    2023, Journal of Verification, Validation and Uncertainty Quantification