1 Introduction

Recent events in space, including the collision of Russia’s Cosmos 2251 satellite with Iridium 33 and China’s Feng Yun 1C anti-satellite demonstration, have stressed the space community’s ability to provide accurate and actionable impact probability estimates because of the additional debris objects that were generated. For example, the Space Surveillance Network (SSN) faces the unique challenge of tracking more than 22,000 Space Objects (SOs) and providing critical collision avoidance warnings to military, NASA, and commercial operators. However, due to the large number of SOs and the limited number of sensors available to track them, it is impossible to maintain persistent surveillance, resulting in large observation gaps [1]. This inherent latency in the catalog information results in sparse observations and large propagation intervals between measurements and close approaches. These large propagation intervals, coupled with nonlinear SO dynamics, result in highly non-Gaussian probability density functions (pdfs). In particular, satellites in Low-Earth Orbit (LEO) are heavily influenced by atmospheric drag, which is difficult to model [2]. Uncertainties in atmospheric drag must be folded into estimation models to accurately represent the position uncertainties for calculating impact probabilities or conjunction assessments (CA). This process separates naturally into a prediction and correction cycle, where estimates are used to predict the orbital position at a future time and observations are used to improve or correct these predictions while decreasing uncertainty. The difficulty in this process lies in representing the non-Gaussian uncertainty and accurately propagating it [3]. Accurate assessment of confidence in position knowledge would be a significant improvement, particularly for the space situational awareness (SSA) community.
The contribution of this chapter is the application of PC [4] and GMM-PC [5] to satellite tracking with upper atmospheric UQ.

A number of upper atmospheric models exist, which can be classified as either empirical or physics-based models [6, 7]. The current Air Force standard is the High Accuracy Satellite Drag Model (HASDM) [8], an empirical model based on observations of calibration satellites. These satellite observations are used to determine atmospheric model parameters based on their orbit determination solutions. Although the HASDM model is accurate for determining the current state of the upper atmospheric environment, it has no forecasting capability, which limits its effectiveness for CA calculations. A number of physics-based models exist, two of which are the Global Ionosphere-Thermosphere Model (GITM) [9] and the Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) [6, 7]. These models solve the full Navier-Stokes equations for density, velocity, and temperature for a number of neutral and charged chemical species. Their improved modeling and prediction capabilities come at a high computational cost: the models are very high-dimensional, solving the Navier-Stokes equations over a discretized spatial grid involving 2000–10,000 state variables and 12–20 inputs and internal parameters. Satellite CA calculations usually involve long propagation intervals (3–8 days), resulting in nonlinear transformations and non-Gaussian errors.

This nonlinearity and high dimensionality result in the so-called curse of dimensionality [10], where the combination of increasing problem dimension and order of nonlinearity causes the number of required evaluations to grow in a super-linear manner. The curse of dimensionality as related to atmospheric models is a difficulty that this chapter addresses. Additionally, CA requires full knowledge of the pdf to calculate the impact probability, as opposed to traditional state estimation and data assimilation approaches, which only require the first two moments (mean and covariance). This chapter presents a new approach that solves for the full pdf.

A common but computationally intensive method of propagating uncertainty is the use of Monte Carlo (MC) simulations [11, 12]. Randomly generated samples from the initial uncertainty distribution are propagated through the function of interest. MC approaches require on the order of millions of propagations to generate statistically valid UQ solutions. Parallelizing the computations on multicore central processing units (CPUs) or on Graphics Processing Units (GPUs) reduces the runtime of the simulations significantly [13,14,15] at the cost of increasing the difficulty of implementation [16]. Reducing the number of sample points required for a result with satisfactory confidence bounds is possible through importance sampling. Although the computational cost can be prohibitive for most applications due to the slow convergence, the generality of MC techniques makes them an ideal benchmark against which other methods can be compared.
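As a concrete illustration, the MC recipe above can be sketched in a few lines. The cubic map below is a hypothetical stand-in for the expensive numerical orbit propagation, and the distribution parameters are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar "dynamics": a mildly nonlinear map standing in for
# numerical orbit propagation, which would be far more expensive.
def propagate(x):
    return x + 0.5 * x ** 3

# Samples drawn from the initial Gaussian uncertainty (illustrative values).
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Push every sample through the nonlinear map; the resulting cloud is a
# particle approximation of the (generally non-Gaussian) propagated pdf.
final = propagate(samples)

# Sample statistics of the propagated distribution.
mean_f = final.mean()
var_f = final.var()
```

The cost is one full propagation per sample, which is exactly what makes MC prohibitive when each propagation is a multi-day numerical orbit integration.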

A spectrum of techniques exists for propagating the state and uncertainty of an initially Gaussian distribution through a nonlinear function such as orbit propagation [17], trading computational cost for accuracy of the pdf. Linear propagation of the covariance matrix using a first-order Taylor series expansion of the dynamics lies at one extreme of computational cost, while MC simulation lies at the other. Two techniques that occupy a range within this spectrum are Gaussian Mixture Models (GMMs) [18] and Polynomial Chaos (PC) expansion [4].

GMMs can approximate any pdf using a weighted sum of Gaussian distributions, with the approximation improving in an \(L_1\)-norm sense as the number of elements increases [18]. When the initial distribution is Gaussian, the approximating GMM has spatially distributed means, and each element has a smaller variance (and hence lower differential entropy) than the initial Gaussian distribution. Using the GMM approximation, each Gaussian component is propagated through the nonlinear function using State Transition Tensors (STTs) [19], sigma-point based methods [20, 21], quadrature, or cubature [22,23,24]. Because each element has a smaller uncertainty than the initial Gaussian distribution at epoch, the Gaussian assumption for each element should hold for propagation times at least as long as those for which it holds for the original distribution. The weighted sum of the propagated Gaussian elements approximates the resulting forecast pdf and can capture non-Gaussian distributions. GMMs have been successfully used in many uncertainty propagation applications such as orbit estimation [25, 26], orbit determination [27, 28], and conjunction assessment [29, 30].

The PC [4] approach uses orthogonal polynomial (OP) expansions as a surrogate model for quantifying uncertainty. The best-suited polynomial type is chosen using the Wiener-Askey scheme and depends on the initial uncertainty distribution [31]. It is also possible to compute optimal orthogonal polynomials for arbitrary pdfs that are not part of the Wiener-Askey scheme using arbitrary PC (aPC) [32]. For Gaussian distributions, Hermite polynomials are the corresponding OPs [4, 31]. For the multidimensional case, the coefficients of the multivariate polynomials are computed such that a mapping of the random variable from the initial time to the final time is approximated. Once the polynomial coefficients are computed, sampling from the PC polynomial approximation generally has a far lower computational cost than a full MC run. The PC approach has been used in many fields for uncertainty quantification of computationally intensive models [33,34,35,36]. In orbital mechanics, PC has previously been used for uncertainty propagation [37, 38] and conjunction assessment [39, 40].

Reference [5] used PC and GMM in a hybrid fashion to quantify state uncertainty for spacecraft. Including a GMM with the PC (GMM-PC) was shown to reduce the overall order required to achieve a desired accuracy. Reference [5] converted the initial distribution into a GMM, and PC was used to propagate each of the elements. Splitting the initial distribution into a GMM reduces the size of the covariance associated with each element, and therefore lower-order polynomials can be used. The GMM-PC effectively reduces the function evaluations required for accurately describing a non-Gaussian distribution that results from the propagation of a state with an initial Gaussian distribution through a nonlinear function. The current chapter uses the GMM-PC method, developed by Ref. [5], for satellite orbital UQ with atmospheric drag. Additionally, the PC method is used for UQ applied to upper atmospheric models without splitting the initial uncertainty into a GMM.

The organization of this chapter is as follows. First, GMMs are discussed. Next, the PC approach is outlined and discussed. Following this, the GMM-PC approach is discussed. Additionally, results are shown for simulated examples of both orbital and atmospheric UQ. Finally, discussions and conclusions are provided.

2 Gaussian Mixture Models

A GMM approximates any pdf in an \(L_1\)-distance sense by using a weighted sum of Gaussian probability density functions [18]:

$$\displaystyle \begin{aligned} p\left(\mathbf{x}\right)={\displaystyle \sum_{i=1}^{N}\alpha_{i}p_{g}\left(\mathbf{x};\boldsymbol{\mu}_{i},{\mathbf{P}}_{i}\right)} \end{aligned} $$
(4.1)

where \(p_{g}\left (\mathbf {x};\boldsymbol {\mu }_{i},{\mathbf {P}}_{i}\right )\) is a multivariate Gaussian pdf with mean \(\boldsymbol {\mu }_{i}\) and covariance \({\mathbf {P}}_{i}\), N is the number of Gaussian probability density functions, and \(\alpha _{i}\) is a positive weight, which satisfies the following constraint:

$$\displaystyle \begin{aligned} \sum_{i=1}^{N}\alpha_{i}=1 \end{aligned} $$
(4.2)

where \(\alpha _{i}>0\) for all i. For uncertainty propagation, the initial Gaussian distribution is split into a GMM and each element is propagated through the nonlinear function. Standard Gaussian propagation techniques such as State Transition Tensors (STTs) [19] or sigma-point methods [20, 21] are commonly used to approximate the Gaussian elements after propagation. Although each element remains Gaussian, the weighted sum forms a non-Gaussian approximation of the true distribution. Modifications of this procedure exist, where the weights can be updated post-propagation [41] or the elements can be further split into more elements or merged mid-propagation [25]. However, these modifications are not considered in this work.
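A minimal sketch of Eqs. (4.1) and (4.2), evaluating a mixture density as a weighted sum of Gaussian pdfs; the mixture parameters below are illustrative, not a fitted split:

```python
import numpy as np

def gaussian_pdf(x, mu, P):
    """Multivariate Gaussian density p_g(x; mu, P)."""
    k = len(mu)
    d = x - mu
    norm = np.sqrt((2.0 * np.pi) ** k * np.linalg.det(P))
    return float(np.exp(-0.5 * d @ np.linalg.solve(P, d)) / norm)

def gmm_pdf(x, weights, means, covs):
    """Weighted sum of Gaussian densities, Eq. (4.1); the weights are
    positive and sum to one, Eq. (4.2)."""
    assert np.isclose(sum(weights), 1.0) and all(w > 0 for w in weights)
    return sum(w * gaussian_pdf(x, m, P)
               for w, m, P in zip(weights, means, covs))

# A three-element mixture in one dimension (illustrative numbers).
weights = [0.25, 0.5, 0.25]
means = [np.array([-1.0]), np.array([0.0]), np.array([1.0])]
covs = [0.2 * np.eye(1)] * 3

p0 = gmm_pdf(np.array([0.0]), weights, means, covs)
```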

Instead of forming a GMM approximation of the initial multivariate Gaussian distribution directly, a univariate GMM library of the standard normal distribution is formed [25, 27, 30, 42]. The univariate library is applied along a column of the square-root factor of the covariance matrix in order to form a GMM approximation of a multivariate Gaussian. The univariate splitting library has to be computed only once and is stored in the form of a lookup table. Finding the univariate library is cast as an optimization problem in which the distance between the GMM and the standard normal distribution is minimized. The \(L_2\) distance is used instead of \(L_1\) because a closed-form solution exists for the \(L_2\) distance between a GMM and a Gaussian distribution. A library in which all the standard deviations in the split are the same (homoscedastic), \(\sigma =\sqrt {1/N}\), with odd N up to 39 elements, is used in this work [43, 44]. With increasing N, σ decreases and therefore the differential entropy of each element decreases, as seen in Fig. 4.1.

Fig. 4.1
figure 1

Differential entropy per element

To apply the univariate splitting library to a multivariate Gaussian distribution \({\mathbf {p}}_G\sim \mathcal {N}\left (\boldsymbol {\mu },\mathbf {P}\right )\), the univariate splitting library is applied along a column of the square-root S of the covariance matrix:

$$\displaystyle \begin{aligned} \mathbf{P}=\mathbf{S}{\mathbf{S}}^T \end{aligned} $$
(4.3)

For an n-dimensional state, the covariance matrix of each element after the split is:

$$\displaystyle \begin{aligned} {\mathbf{P}}_{i}=\left[{\mathbf{s}}_{1}\ldots\sigma{\mathbf{s}}_{k}\ldots{\mathbf{s}}_{n}\right]\left[{\mathbf{s}}_{1}\ldots\sigma{\mathbf{s}}_{k}\ldots{\mathbf{s}}_{n}\right]^{T} \end{aligned} $$
(4.4)

where \({\mathbf {s}}_{k}\) is the column of S along which the split is performed. The means of the multivariate GMM are:

$$\displaystyle \begin{aligned} \boldsymbol{\mu}_i=\boldsymbol{\mu}+\mu_i{\mathbf{s}}_{k} \end{aligned} $$
(4.5)

where \(\mu _{i}\) are the univariate library mixture split locations [43, 44]. If Cholesky or spectral decomposition is used to generate S, the possible splitting options are limited to 2n directions. However, it is possible to apply the univariate splitting library along any desired direction by generating a square-root matrix with one column parallel to the input direction [45]. For extremely nonlinear problems, splitting along a single direction may not account for the entire nonlinearity of the problem. Therefore, splitting the initial multivariate distribution in multiple directions is required in order to better approximate the non-Gaussian behavior post-propagation [44, 46]. In such cases the splitting library can be applied recursively as a tensor product to split along multiple directions.
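The splitting of Eqs. (4.3)–(4.5) can be sketched as follows. The equal-weight three-element library used here (with σ = √(1/3)) is a simple moment-matching placeholder rather than the optimized \(L_2\) library tabulated in the cited references, but it preserves the mean and covariance of the original Gaussian exactly:

```python
import numpy as np

def split_gaussian(mu, P, k, lib_means, lib_sigma, lib_weights):
    """Split N(mu, P) into a GMM along column k of a square-root factor.

    The univariate library (lib_means, lib_sigma, lib_weights) describes a
    GMM approximation of the standard normal. The values passed in below
    are a placeholder, not the optimized homoscedastic library of the
    cited references.
    """
    S = np.linalg.cholesky(P)                      # Eq. (4.3): P = S S^T
    Sk = S.copy()
    Sk[:, k] *= lib_sigma                          # scale the split column
    P_i = Sk @ Sk.T                                # Eq. (4.4), shared by all
    means = [mu + m * S[:, k] for m in lib_means]  # Eq. (4.5)
    return list(lib_weights), means, [P_i] * len(lib_means)

# Equal-weight 3-element library with sigma = sqrt(1/3): this particular
# choice matches the first two moments of the standard normal exactly.
lib_w = np.full(3, 1.0 / 3.0)
lib_m = np.array([-1.0, 0.0, 1.0])
lib_s = np.sqrt(1.0 / 3.0)

mu = np.array([0.0, 0.0])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
weights, means, covs = split_gaussian(mu, P, 0, lib_m, lib_s, lib_w)

# The mixture moments reproduce the original distribution.
mix_mean = sum(w * m for w, m in zip(weights, means))
mix_cov = sum(w * (C + np.outer(m - mix_mean, m - mix_mean))
              for w, m, C in zip(weights, means, covs))
```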

3 Polynomial Chaos

The idea of PC originates from the work of Norbert Wiener [4], where the term chaos is used to refer to uncertainty. The theory has been used frequently for UQ and is now also being used in the aerospace field [37, 39, 47,48,49]. In PC, the uncertainty in a variable through a transformation is represented by a series of orthogonal polynomials

$$\displaystyle \begin{aligned} u(\xi,t)=\sum_{i=0}^{\infty}c_{i}(t)\Psi_{i}(\xi) \end{aligned} $$
(4.6)

where u(ξ, t) is the output of interest and ξ is a random variable. The orthogonal polynomials \(\Psi _{i}\) are defined by the following inner product in a Hilbert space:

$$\displaystyle \begin{aligned} \int_{-\infty}^{\infty}\Psi_{m}(\xi)\Psi_{n}(\xi)w(\xi)\,d\xi=0,\qquad m\neq n \end{aligned} $$
(4.7)

Based on the distribution of the random variable, the orthogonal polynomial type and weighting function w(ξ) from Eq. (4.7) are chosen from the Wiener-Askey scheme [31] found in Table 4.1.

Table 4.1 The Wiener-Askey scheme where α and β are pdf specific parameters [50]

Since most applications assume the initial distribution to be Gaussian, Hermite polynomials are chosen according to the Wiener-Askey scheme. This chapter applies the normalized probabilists’ Hermite polynomials, for which the weight function becomes:

$$\displaystyle \begin{aligned} w(\xi) = \frac{1}{\sqrt{2\pi}}e^{\frac{-\xi^2}{2}} \end{aligned} $$
(4.8)

The weight function assumes that the distribution has a mean of 0 and a standard deviation of 1, which effectively normalizes the problem and improves its numerical properties. The normalized Hermite polynomials can be found using the following recursive relation:

$$\displaystyle \begin{aligned} \sqrt{n+1}\,\Psi_{n+1}(\xi)=\xi\Psi_{n}(\xi)-\sqrt{n}\,\Psi_{n-1}(\xi) \end{aligned} $$
(4.9)

where

$$\displaystyle \begin{aligned} \Psi_{0}=1\qquad \Psi_{1}=\xi \end{aligned} $$
(4.10)

In practice, the infinite series from Eq. (4.6) is truncated at some order. The orthogonal univariate Hermite polynomials up to order 5 can be seen in Fig. 4.2. The conjunction problem depends on the full position and velocity of the satellite and is therefore a multivariate problem that requires orthogonal multivariate polynomials. Multivariate polynomials can be created using multi-index notation. Two-dimensional multivariate polynomials up to order 2 can be seen in Table 4.2. The multivariate polynomial can then be written as:

$$\displaystyle \begin{aligned} u(\boldsymbol{\xi},t)=\sum_{i=0}^{L-1}c_{i}(t)\Psi_{\alpha_{i}}(\boldsymbol{\xi}) \end{aligned} $$
(4.11)

where L is given by

$$\displaystyle \begin{aligned} L = \frac{(n+l)!}{n!l!} \end{aligned} $$
(4.12)

where n is the dimension of ξ and l is the maximum order of the truncated univariate polynomial. The order of a given multivariate polynomial equals the sum of the elements of its multi-index vector. If the output is also a vector function u(ξ, t) of dimension n, then n × L coefficients \(c_{i}(t)\) have to be computed.
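The term count of Eq. (4.12) and the associated multi-index sets can be sketched as follows (helper names are illustrative):

```python
from itertools import product
from math import comb

def num_terms(n, l):
    """Total number of multivariate basis terms L, Eq. (4.12)."""
    return comb(n + l, l)          # (n + l)! / (n! l!)

def multi_indices(n, l):
    """All multi-index vectors of dimension n with total order <= l;
    the order of each basis polynomial is the sum of its multi-index."""
    return [a for a in product(range(l + 1), repeat=n) if sum(a) <= l]

# Two dimensions up to order 2 gives six terms, matching Table 4.2 ...
assert num_terms(2, 2) == len(multi_indices(2, 2)) == 6
# ... while a 6-dimensional state (position and velocity) at order 3
# already needs 84 coefficients per output dimension.
assert num_terms(6, 3) == 84
```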

Fig. 4.2
figure 2

Normalized probabilists Hermite polynomials

Table 4.2 Two-dimensional multivariate polynomials up to order 2

The final challenge is to determine the coefficients \(c_{i}(t)\). The two major methods used to determine these coefficients are the intrusive method and the non-intrusive method. The intrusive method requires knowledge of the propagation function that determines the evolution of the random vector of inputs, resulting in a system of equations that must be solved for \(c_{i}(t)\). The intrusive method cannot be used with black-box dynamics and is therefore not considered in this work. The non-intrusive method does not require any knowledge of the propagation function. Given that the system can be solved for a specified initial condition, the projection property (Galerkin projection) is used to approximate the coefficients of Eq. (4.11):

$$\displaystyle \begin{aligned} c_{i}(t)=\int u(\xi,t)\Psi_{i}(\xi)p(\xi)d\xi \end{aligned} $$
(4.13)

where p(ξ) is the pdf of ξ.

The coefficients in the non-intrusive method can be solved for using either Least Squares (LS) or a quadrature method. When LS is used, the initial states are randomly sampled; when the quadrature method is used, the initial states are chosen based on the node locations of the quadrature rule. The number of initial states can be vastly reduced by using Compressive Sampling (CS) with LS, or Sparse Grids (SG) with the quadrature method. In this work, the quadrature method is used with a Smolyak SG (SSG) [51]. The SSG uses fewer grid points than a full tensor product quadrature, as can be seen in Fig. 4.3. In the quadrature method, a grid is generated with \(N_{q}\) node points, where each node has an associated location \(\boldsymbol{\xi}_{n}\) and weight \(q_{n}\). The coefficients \(c_{i}(t)\) are then found using the following summation:

$$\displaystyle \begin{aligned} c_{i}(t)=\sum_{n=1}^{N_{q}}q_{n}u(\boldsymbol{\xi}_{n},t)\Psi_{\alpha i}(\boldsymbol{\xi}_{n}) \end{aligned} $$
(4.14)

It should be noted that the node points are generated from a zero-mean, identity-covariance multivariate distribution for numerical accuracy. The points are simply scaled to the actual mean and covariance inside the transformation function u(ξ, t).
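A minimal one-dimensional sketch of the non-intrusive projection of Eq. (4.14): the toy map u(ξ) = ξ² stands in for the black-box dynamics, and the quadrature weights are rescaled so that they integrate against the standard normal (all names here are illustrative):

```python
import numpy as np

def hermite_normalized(xi, order):
    """Normalized probabilists' Hermite polynomials Psi_0..Psi_order at xi."""
    xi = np.asarray(xi, dtype=float)
    psi = [np.ones_like(xi), xi.copy()]
    for n in range(1, order):
        psi.append((xi * psi[n] - np.sqrt(n) * psi[n - 1]) / np.sqrt(n + 1))
    return np.array(psi[: order + 1])

# Quadrature rule for the standard normal: hermegauss integrates against
# exp(-xi^2/2), so the weights are rescaled by 1/sqrt(2*pi).
nodes, wts = np.polynomial.hermite_e.hermegauss(8)
q = wts / np.sqrt(2.0 * np.pi)

# Toy nonlinear "propagation" evaluated at each node; in the chapter's
# setting this would be one orbit propagation per node.
u = nodes ** 2

# Eq. (4.14): project the model output onto each basis polynomial.
psi = hermite_normalized(nodes, 3)
c = psi @ (q * u)

# Sampling the cheap surrogate instead of rerunning the model:
xi_s = np.random.default_rng(1).standard_normal(200_000)
surrogate = hermite_normalized(xi_s, 3).T @ c
```

Only eight model evaluations are needed to fit the surrogate, after which arbitrarily many pdf samples cost essentially nothing.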

Fig. 4.3
figure 3

Difference between a full (red) and sparse (blue) two-dimensional quadrature grid

4 Polynomial Chaos with Gaussian Mixture Models

Reference [5] developed the GMM-PC method, and this section provides a brief introduction to it. Both PC and GMMs can represent non-Gaussian distributions at lower computational cost than a full-blown MC simulation. However, both have limitations. The biggest problem with PC is the curse of dimensionality: the number of coefficients required with increasing order and dimension for multivariate polynomials can be computed from Eq. (4.12) and is shown in Fig. 4.4a, and the number of nodes where computations must be carried out also increases rapidly with increasing order and dimension, as seen in Fig. 4.4b. When GMMs are used for multivariate applications, the univariate library is applied along one specified direction. Thus, the direction along which the splitting is carried out can play a very important role in the quality of the resulting non-Gaussian distribution after a nonlinear transformation [30].

Fig. 4.4
figure 4

Curse of dimensionality with Polynomial Chaos. (a) Terms required for multivariate polynomials. (b) Number of nodes for a Smolyak grid

A combination of GMMs with PC results in a method that can outperform each of the separate techniques because they complement each other [5]. This chapter uses the GMM-PC approach for orbital UQ, while the PC approach is applied to atmospheric density UQ, providing the interaction between these two forms of UQ. In the GMM-PC method, each of the mixture elements is represented by a PC expansion. The GMM splitting reduces the size of the distribution that each PC expansion has to account for. This is analogous to reducing the interval of a Taylor series expansion (TSE), or to the Finite Element Method (FEM): simpler elements (lower-order PC expansions) are used over smaller subdomains (the GMM elements) to approximate the final non-Gaussian distribution over a larger domain. The benefit can be seen in a very simple test case in which an initial Gaussian distribution of a state in polar coordinates is converted to Cartesian coordinates. Since this transformation is nonlinear, the resulting distribution becomes non-Gaussian. The true (MC) and approximated distributions can be seen in Fig. 4.5. The PC approximation is much better than the strictly Gaussian approximation, as can be seen in Fig. 4.5a, b. Combining PC and GMM, however, results in a much lower discrepancy between the MC and approximated distributions.
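The polar-to-Cartesian test case can be sketched with a plain PC surrogate (no GMM split); the distribution parameters and the restriction to the x coordinate are illustrative. With GMM-PC, the same fit would simply be repeated for each mixture element over its smaller covariance:

```python
import numpy as np
from itertools import product

def hermite_normalized(xi, order):
    """Normalized probabilists' Hermite polynomials Psi_0..Psi_order at xi."""
    xi = np.asarray(xi, dtype=float)
    psi = [np.ones_like(xi), xi.copy()]
    for n in range(1, order):
        psi.append((xi * psi[n] - np.sqrt(n) * psi[n - 1]) / np.sqrt(n + 1))
    return np.array(psi[: order + 1])

# Gaussian uncertainty in polar coordinates (illustrative numbers): the
# standard-normal pair xi maps to (r, theta) and then to x = r*cos(theta).
mu_r, sig_r, mu_t, sig_t = 1.0, 0.1, 0.8, 0.2

def to_cartesian_x(xi1, xi2):
    return (mu_r + sig_r * xi1) * np.cos(mu_t + sig_t * xi2)

order = 4
alphas = [a for a in product(range(order + 1), repeat=2) if sum(a) <= order]

# Full tensor-product Gauss-Hermite-e grid (a Smolyak grid would use
# fewer nodes in higher dimensions).
n1, w1 = np.polynomial.hermite_e.hermegauss(order + 1)
grid = np.array(list(product(n1, n1)))
qwts = np.array([wa * wb for wa, wb in product(w1, w1)]) / (2.0 * np.pi)

# Model evaluations at the nodes, then projection as in Eq. (4.14).
u = to_cartesian_x(grid[:, 0], grid[:, 1])
P1 = hermite_normalized(grid[:, 0], order)
P2 = hermite_normalized(grid[:, 1], order)
c = np.array([np.sum(qwts * u * P1[a] * P2[b]) for a, b in alphas])

# Compare the cheap PC surrogate against direct Monte Carlo.
rng = np.random.default_rng(3)
xi = rng.standard_normal((100_000, 2))
S1 = hermite_normalized(xi[:, 0], order)
S2 = hermite_normalized(xi[:, 1], order)
pc_samples = sum(ci * S1[a] * S2[b] for ci, (a, b) in zip(c, alphas))
mc_samples = to_cartesian_x(xi[:, 0], xi[:, 1])
```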

Fig. 4.5
figure 5

True distribution (blue) and approximated distribution (red) after conversion from Polar coordinates to Cartesian coordinates. (a) Gaussian approximation using the Unscented Transform. (b) Polynomial Chaos approximation. (c) GMM approximation with 3 elements. (d) PC GMM approximation with 3 elements. (e) GMM approximation with 5 elements. (f) PC GMM approximation with 5 elements

5 Global Ionosphere-Thermosphere Model

The novel GMM-PC method is implemented for the problem of SSA (orbit estimation and propagation). The major source of uncertainty in orbital propagation is the ionosphere-thermosphere environment. Therefore, this work characterizes the uncertainty in the ionosphere-thermosphere through the PC approach. For this purpose, this work uses a physics-based model, the Global Ionosphere-Thermosphere Model (GITM).

The Global Ionosphere-Thermosphere Model (GITM) [9] is a physics-based model that solves the full Navier-Stokes equations for density, velocity, and temperature for a number of neutral and charged components. The model explicitly solves for the neutral densities of O, O\(_{2}\), N(\(^{2}\)D), N(\(^{2}\)P), N(\(^{4}\)S), N\(_{2}\), NO, H, and He, and for the ion species O\(^{+}\)(\(^{4}\)S), O\(^{+}\)(\(^{2}\)D), O\(^{+}\)(\(^{2}\)P), \(O_{2}^{+}\), N\(^{+}\), \(N_{2}^{+}\), NO\(^{+}\), H\(^{+}\), and He\(^{+}\). It also contains chemistry between ions and neutrals, ions and electrons, and neutrals and neutrals. In addition, GITM self-consistently solves for the neutral, ion, and electron temperatures; the bulk horizontal neutral winds; the vertical velocities of the individual species; and the ion and electron velocities. To account for solar activity, GITM can use F10.7 as a proxy for EUV spectrum measurements.

Some of the more important features of GITM are: adjustable resolution; a non-uniform grid in the altitude and latitude coordinates; dynamics equations solved without the assumption of hydrostatic equilibrium; explicitly solved advection, so that the time step in GITM is approximately 2–4 s; explicitly solved chemistry, so that there are no approximations of local chemical equilibrium; the ability to choose different models of electric fields and particle precipitation patterns; the ability to start from the NRLMSISE-00 empirical model [6, 7] or international reference ionosphere (IRI) model [52] solutions; and the ability to use a realistic (or ideal) magnetic field determined at the time of the model run. The main parameter of interest is F10.7, a measure of the solar radio flux at a wavelength of 10.7 cm that is used in GITM as a proxy for solar activity. Figure 4.6 shows the F10.7 solar radio flux index from 1980 up to approximately 2011, where the 11-year solar cycle is clearly visible in the high and low activity peaks.

Fig. 4.6
figure 6

The F10.7 solar radio flux index

6 Results

Two simulation studies are conducted: the first investigates the orbital position UQ problem using the GMM-PC approach, while the second investigates the atmospheric density UQ problem using the PC approach (without GMM splitting). The results for these two cases are discussed in this section.

6.1 Orbital Uncertainty Quantification

In this section, a test simulation is carried out to investigate the validity of the GMM-PC method developed by Ref. [5] for an orbital application. The nonlinearity of the orbital equations, combined with the presence of perturbations such as atmospheric drag, makes the orbital pdf increasingly non-Gaussian with flight time. This test case therefore propagates a satellite in a near-circular LEO orbit at an altitude of approximately 450 km under the influence of atmospheric drag, simulated using the Jacchia-Bowman 2008 (JB2008) Empirical Thermospheric Density Model [53].

A Gaussian distribution was generated about an initial condition of the orbit. An MC and a GMM-PC simulation were then carried out for 1 day (Fig. 4.7a) and for 5 days (Fig. 4.7b). The simulation was carried out only as a planar two-dimensional trajectory for simplicity but can easily be extended to a full three-dimensional simulation in the future. As can be seen in the results in Fig. 4.7, the final distribution is highly non-Gaussian. However, the GMM-PC simulation, with orders of magnitude fewer runs, is able to represent the final distribution well.

Fig. 4.7
figure 7

MC results (blue) and PC GMM results (red) for the test orbit. (a) Distribution after a time of flight of 1 day. (b) Distribution after a time of flight of 5 days

6.2 Initial Results for Atmospheric Density Forecasting

Low-Earth orbiting (LEO) satellites are heavily influenced by atmospheric drag, which is very difficult to model accurately. One of the main sources of uncertainty is input parameter uncertainty. These input parameters include F10.7, Ap, and solar wind parameters. They are measured constantly, and the measurements are used to predict their future values. The predicted values are then used in the physics-based models to predict future atmospheric conditions. Therefore, for the forward prediction of orbital uncertainty, the uncertainty of the atmospheric density due to these parameters must be characterized.

These simulation examples focus on using the PC technique for UQ of physics-based atmospheric models. Unlike the previous case, this case studies the use of PC alone (without GMM splitting) for UQ of the atmospheric density. The PC approach is used to quantify the forecast uncertainty in future atmospheric conditions due to uncertainty in F10.7, the Ap-index (a measure of the general level of geomagnetic activity over the globe for a given day), and solar wind parameters. As part of the CA process, accurate and consistent UQ is required for the atmospheric models used.

In this section, initial results for PC UQ applied to the GITM model are discussed. The goal is to use a physics-based atmospheric density model to obtain accurate density forecasts for use in conjunction assessments. The GITM model has a number of input parameters that can be derived from observations, but the model also needs forecasts of these inputs, and the forecasted values may be highly uncertain. Therefore, we examine the uncertainty in the forecasted density due to the uncertainty in these inputs. The main input parameter driving the dynamics in the GITM model is F10.7 (see Fig. 4.6). Two simulation cases are considered here: the first uses quiet solar condition model input parameters, and the second uses active solar condition model input parameters. The first case considers only F10.7 as an uncertain input, while the second case considers uncertainty in F10.7, the Interplanetary Magnetic Field (IMF) components in GSM coordinates (\(B_x\), \(B_y\), \(B_z\)) (nT), the solar wind velocity \(V_x\) (km/s), and the hemispheric power index (HPI). The results for these simulations are shown in Fig. 4.8. The time period for the simulations shown is Oct 21–26, 2002.

Fig. 4.8
figure 8

Uncertainty quantification for atmospheric density for Oct 21–26, 2002. (a) Case I: Mean density. (b) Case I: Mean density uncertainty. (c) Case II: Mean density. (d) Case II: Mean density uncertainty

For these simulations, the input parameters are modeled as random but constant during the forecast. In the first case, F10.7 is assumed to have a normal distribution \(\mathcal {N}(165.98,8.34^2)\); one-dimensional quadrature points are used as simulation ensembles, and the PC model is fit using one-dimensional Hermite polynomials. In the second case, the random variables have the distribution \(\mathcal {N}({\boldsymbol \mu },\mathbf{P})\), with μ = [165.98, −1.45, 0.06, −0.5, −551.79, 38.07]\(^{T}\) and the covariance given by

$$\displaystyle \begin{aligned} \mathbf{P}=\text{diag}\left([8.33^2, 4.84^2, 4.10^2, 2.15^2, 105.1^2, 38.87^2]\right) \end{aligned} $$
(4.15)

For the second case, a Smolyak sparse cubature is used to generate the simulation ensembles, which are fit to multi-dimensional Hermite polynomials. From the figure, it is clear that the uncertainty exhibits complex behavior across geographic locations. Moreover, the differences between the test cases highlight the fact that solar conditions can drastically affect the model’s accuracy. From the figures it is seen that during storm conditions (Fig. 4.8d) the uncertainty can be as large as 30%, but only 5% during quiet times (Fig. 4.8b).

7 Conclusion

The combination of Polynomial Chaos (PC) expansion with Gaussian Mixture Models (GMMs) results in a framework that can efficiently capture the evolution of an initially Gaussian distribution into a highly non-Gaussian distribution through a nonlinear transformation. This work shows a Dynamic Data-Driven Applications Systems (DDDAS) approach that can update UQ estimates based on observed data of changing solar conditions. In particular, F10.7, the Ap-index (geomagnetic activity), and solar wind parameters from observational data can be used to develop a pdf of the expected atmospheric drag. Using an initial GMM reduces the domain covered by each PC expansion, and thus lower-order polynomials can be used to obtain accurate results. Increasing the order of the polynomials increases the computational load exponentially, whereas increasing the number of elements results in a near-linear increase in the computational load; moreover, increasing the polynomial order beyond a certain point only marginally improves accuracy.

This work applies the GMM-PC approach to the orbital Uncertainty Quantification (UQ) problem. It was shown that the GMM-PC approach outperformed the PC approach for the solar conditions considered. Additionally, the PC approach was applied to physics-based atmospheric models, and it was shown that the uncertainty in atmospheric density models has a complex behavior across geographic locations. The test cases shown in this work highlight the fact that solar conditions can drastically affect model accuracy: during solar storm conditions the uncertainty can be as large as 30%, but only 5% during quiet times. This work provides initial results of the GMM-PC applied to orbital uncertainty propagation and of the PC approach applied to atmospheric density.