Extending bluff-and-fix estimates for polynomial chaos expansions

https://doi.org/10.1016/j.jocs.2020.101287

Highlights

  • An iterative polynomial chaos algorithm is applied to approximate the solution to a nonlinear PDE with stochastic parameters.

  • The solution to a smaller polynomial chaos $M_0$ system is iteratively used to determine the solution to corresponding $M$ systems for a consecutive range of $M > M_0$.

  • The algorithm allows the user to choose which component variables in a PDE system are higher priority to estimate accurately, enabling efficient performance when the objective is to evaluate a function of the PDE solution that does not weigh the importance of each variable equally.

Abstract

Polynomial chaos methods, which are part of a broader class known as stochastic Galerkin schemes, can be used to approximate the solution to a PDE with uncertainties represented by stochastic inputs or parameters. The stochastic solution is expressed as an infinite polynomial expansion truncated to $M+1$ terms. The approach is then to derive a resulting system of coupled, deterministic PDEs and solve this system with standard numerical techniques. Some challenges with conventional numerical techniques applied in this context are as follows: (1) the solution to a polynomial chaos $M$ system cannot easily reuse an already existing computed solution to an $M_0$ system for some $M_0 < M$, and (2) there is no flexibility around choosing which variables in an $M$ system are more important or advantageous to estimate accurately. This latter point is especially relevant when, rather than focusing on the PDE solution itself, the objective is to approximate some function of the PDE solution that weights the solution variables with relative levels of importance. In Lyman and Iaccarino (2020) [1], we first present a promising iterative strategy (bluff-and-fix) to address challenge (1); numerical estimates of its accuracy and efficiency demonstrate that bluff-and-fix can be more effective than using monolithic methods to solve a whole $M$ system directly. This paper is an extended version of Lyman and Iaccarino (2020) [1] that showcases how bluff-and-fix successfully addresses challenge (2) as well by allowing for choice in which variables are approximated more accurately, in particular when estimating statistical properties such as the mean and variance of an $M$ system solution.

Introduction

Uncertainty quantification (UQ) in physical models, including those governed by systems of partial differential equations, is important for building confidence in the resulting predictions. Uncertainties can originate from imperfect knowledge of boundary or initial conditions in the system of interest or variability in the operating conditions. A common approach is to represent the sources of uncertainty as stochastic variables; in this context the solution to the original differential equation becomes a random quantity. Stochastic Galerkin schemes (SGS) are used to approximate the solution to parametrized differential equations [2], [3], [4]. In particular, they utilize a functional basis in the parameter space to express the solution and then derive and solve a deterministic system of PDEs with standard numerical techniques [5], [6]. A Galerkin method projects the randomness in a solution onto a finite-dimensional basis, making deterministic computations possible. SGS are part of a broader class known as spectral methods [7], [8], [9].

The most common UQ strategies involve Monte Carlo (MC) algorithms, which suffer from a slow convergence rate proportional to the inverse square root of the number of samples [10]. If each sample evaluation is expensive — as is often the case for the solution of a PDE — this slow convergence can make obtaining tens of thousands of evaluations computationally infeasible [8]. Initial spectral method applications to UQ problems showed orders-of-magnitude reductions in the cost needed to estimate statistics with comparable accuracy [11].

In the conventional approach for SGS, the unknown quantities are expressed as infinite series of orthogonal polynomials in the space of the random input variables. This representation has its roots in the work of Wiener [12], who expressed a Gaussian process as a countably infinite series of Hermite polynomials. Ghanem and Spanos [7] truncated Wiener’s representation and used the resulting finite series as a key ingredient in a stochastic finite element method. SGS based on polynomial expansions are often referred to as polynomial chaos approaches. In contrast to sampling methods (e.g., Monte Carlo simulations), polynomial chaos approaches are intrusive methods, i.e., they require the solution of a mathematical problem that is different from the one originally considered. Uncertain quantities are represented as expansions that separate deterministic coefficients from a chosen random orthogonal basis [13].

Let $D = [0,1] \times [0,T]$ be a subset of the spatial and temporal domain $\mathbb{R}_x \times \mathbb{R}_{t \ge 0}$. Then let $u : D \to \mathbb{R}$ be continuous and differentiable in its space and time components; further, let $u \in L^2(D)$. This $u$ represents the solution to a differential equation,
$$F(u, x, t) = 0. \qquad (1)$$
Here $F$ is a general differential operator that contains both spatial and temporal derivatives. Often $F$ is assumed to be nonlinear.

Let $\xi$ be a zero-mean, square-integrable, real random variable. We assume uncertainty is present in the initial condition $u(\cdot, 0)$ and represent it by setting $u(\cdot, 0; \xi) : \mathbb{R}_x \to \mathbb{R}$,
$$u(x, 0; \xi) = f(x, \xi),$$
where $f$ is a known function of $x$ and $\xi$. Accordingly, the solution $u(x, t; \xi)$ to $F(u, x, t) = 0$ is now a random variable indexed by $(x, t) \in D$, meaning $u(x, t; \xi)$ is a stochastic process.

Regarding the statistics of $\xi$, we require that both $u(x, t; \xi)$ and $f(x, \xi)$ have finite second moments. Note that these are the only restrictions; namely, even though $f$ is chosen as sinusoidal in the example of Section 2, we do not require $f$ to be periodic, bounded over the real line, zero on the whole boundary $\partial D$, etc.

We consider a polynomial chaos expansion (PCE) and separate the deterministic and random components of $u$ by writing
$$u(x, t; \xi) = \sum_{k=0}^{\infty} u_k(x, t)\, \Psi_k(\xi).$$
The $u_k : D \to \mathbb{R}$ are deterministic coefficients, while the $\Psi_k$ are orthogonal polynomials with respect to the measure $d\xi$ induced by $\xi$. Let $\langle \cdot, \cdot \rangle$ denote the inner product mapping $(\phi, \psi) \mapsto \int \phi(\xi)\,\psi(\xi)\, d\xi$. The triple-product notation $\langle \phi \psi \varphi \rangle$ is understood as $\langle \phi, \psi\varphi \rangle = \langle \phi\psi, \varphi \rangle = \int \phi(\xi)\,\psi(\xi)\,\varphi(\xi)\, d\xi$, and the singleton $\langle \phi \rangle$ as $\int \phi(\xi)\, d\xi$. Then we require the $\Psi_k$ to satisfy the properties [5]
$$\langle \Psi_k \rangle = \mathbb{E}_\xi(\Psi_k) = \delta_{k0}, \qquad \langle \Psi_i \Psi_j \rangle = c_i\, \delta_{ij}, \qquad (2)$$
where the $c_i \in \mathbb{R}$ are nonzero and $\delta_{ij}$ is the Kronecker delta. Note that the Hermite $\Psi_k$ are necessarily uncorrelated as random variables in $\xi$ by their orthogonality.
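
To make these orthogonality requirements concrete, the following is a minimal numerical sketch (ours, not the authors' code) for the case used later in the paper, $\xi \sim \mathcal{N}(0, 1)$ with probabilists' Hermite polynomials, for which $c_k = k!$. The quadrature order and helper names are our own choices.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-Hermite quadrature for the weight exp(-xi^2 / 2); dividing the weights
# by sqrt(2*pi) turns quadrature sums into expectations under xi ~ N(0, 1).
nodes, weights = hermegauss(40)
weights = weights / np.sqrt(2.0 * np.pi)

def psi(k, xi):
    """Evaluate the probabilists' Hermite polynomial Psi_k = He_k at xi."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    return hermeval(xi, coeffs)

# Check <Psi_k> = delta_{k0} and <Psi_i Psi_j> = c_i delta_{ij} with c_i = i!.
for k in range(5):
    assert np.isclose(np.sum(weights * psi(k, nodes)), 1.0 if k == 0 else 0.0, atol=1e-10)
for i in range(5):
    for j in range(5):
        expected = math.factorial(i) if i == j else 0.0
        assert np.isclose(np.sum(weights * psi(i, nodes) * psi(j, nodes)), expected, atol=1e-8)
print("orthogonality relations verified for k = 0, ..., 4")
```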

Let $S_M(x, t; \xi) = \sum_{k=0}^{M} u_k(x, t)\, \Psi_k(\xi)$ be a stochastic process indexed by $(x, t) \in D$. By the Cameron–Martin theorem, the truncated PCE denoted by $S_M$ converges in mean square,
$$S_M(x, t; \xi) \xrightarrow[M \to \infty]{\;L^2\;} u(x, t; \xi),$$
at every fixed $(x, t)$ in the domain $D$. This justifies the PCE and its truncation to a finite number of terms for the sake of computation.
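
As a small illustration of this mean-square convergence (again our own sketch, not taken from the paper), one can truncate the Hermite expansion of a toy target $g(\xi) = e^{\xi}$, whose coefficients are given by the standard identity $g_k = e^{1/2}/k!$, and watch the $L^2$ error shrink as $M$ grows:

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Quadrature nodes/weights for expectations under xi ~ N(0, 1).
nodes, weights = hermegauss(80)
weights = weights / np.sqrt(2.0 * np.pi)

g = np.exp(nodes)  # toy target g(xi) = exp(xi)
for M in (1, 2, 4, 8):
    # Hermite coefficients of exp(xi): g_k = e^(1/2) / k!  (standard identity).
    coeffs = np.array([np.exp(0.5) / math.factorial(k) for k in range(M + 1)])
    s_M = hermeval(nodes, coeffs)              # truncated expansion S_M(xi)
    mse = np.sum(weights * (g - s_M) ** 2)     # E[(g - S_M)^2]
    print(f"M = {M}:  mean-square error = {mse:.3e}")
```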

Stochastic properties of the solution $u$ can be readily computed [8], [15], [17], [18] given the attributes in (2). Namely,
$$E(x, t) \equiv \mathbb{E}_\xi\, u(x, t; \xi) = \sum_{k=0}^{\infty} u_k(x, t)\, \mathbb{E}_\xi \Psi_k(\xi) = u_0(x, t).$$
Going forward, the symbol $E$ denotes a deterministic $C^2(D)$ function in $x$ and $t$, in addition to the usual expectation operator.

Similarly, the variance of the solution is
$$V(x, t) \equiv \mathbb{V}_\xi\, u(x, t; \xi) = \sum_{k=1}^{\infty} u_k(x, t)^2\, \langle \Psi_k \Psi_k \rangle = \sum_{k=1}^{\infty} c_k\, u_k(x, t)^2.$$
Note that this variance is well defined because we assume $u(x, t; \xi)$ has finite second moments over all $(x, t)$ indices. Using the truncated expansion $S_M$, we can approximate the solution’s true variance $\mathbb{V}_\xi\, u(x, t; \xi)$ as
$$V_M(x, t) \equiv \mathbb{V}_\xi\, S_M(x, t; \xi) = \mathbb{V}_\xi \sum_{k=0}^{M} u_k(x, t)\, \Psi_k(\xi) = \sum_{k=1}^{M} c_k\, u_k(x, t)^2 \approx \mathbb{V}\big(u(x, t; \xi)\big).$$
As mentioned in the abstract, this paper is an extended version of [1] that considers a PDE system obtained by explicitly representing the PDEs in terms of $V_M(x, t)$ and $E(x, t)$. In Section 3.4, our objective will not be to compute $S_M$ but instead to estimate $E$ and $V_M$. The beauty is that, unlike the case of approximating $u$ by $S_M$, not all of the coefficient functions $u_k(x, t)$ need to be determined perfectly in order to approximate $E$ and $V_M$ accurately. In this sense, the algorithm we introduce (bluff-and-fix) is preferable to conventional methods that do not have the flexibility to choose which $u_k(x, t)$ are more important to estimate well.
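
Both statistics translate directly into a few lines of code once the coefficient functions are available on a grid. The sketch below is our own illustration (array shapes and names are assumptions), using $c_k = k!$ for the Hermite basis adopted later in the paper.

```python
import math

import numpy as np

def pce_mean_and_variance(u_coeffs):
    """
    u_coeffs[k] holds u_k(x, t) sampled on a spatial grid (shape (M + 1, n_x)).
    Returns E = u_0 and the truncated variance V_M = sum_{k=1}^{M} c_k * u_k^2,
    with c_k = k! for the probabilists' Hermite basis.
    """
    M = u_coeffs.shape[0] - 1
    c = np.array([math.factorial(k) for k in range(1, M + 1)], dtype=float)
    mean = u_coeffs[0]
    variance = np.einsum("k,kx->x", c, u_coeffs[1:] ** 2)
    return mean, variance

# Example: if only u_0 and u_1 are nonzero, then V_M = 1! * u_1(x, t)^2.
x = np.linspace(0.0, 3.0, 61)
u = np.zeros((3, x.size)); u[0], u[1] = np.cos(x), np.sin(x)
mean, var = pce_mean_and_variance(u)
assert np.allclose(mean, np.cos(x)) and np.allclose(var, np.sin(x) ** 2)
```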

Substituting the truncation $S_M$ into Eq. (1), we have
$$F\!\left(\sum_{k=0}^{M} u_k(x, t)\, \Psi_k(\xi),\, x,\, t\right) = 0. \qquad (7)$$
Furthermore, we can determine the initial conditions for the deterministic component functions. Multiplying $u(x, 0; \xi)$ by any $\Psi_k$ and integrating with respect to the $\xi$-measure $d\xi$ yields
$$\sum_{i=0}^{\infty} u_i(x, 0) \int \Psi_i(\xi)\, \Psi_k(\xi)\, d\xi = \int f(x, \xi)\, \Psi_k(\xi)\, d\xi = \mathbb{E}_\xi\big[f(x, \xi)\, \Psi_k(\xi)\big] \;\;\Longrightarrow\;\; u_k(x, 0) = \frac{1}{c_k}\, \mathbb{E}_\xi\big[f(x, \xi)\, \Psi_k(\xi)\big].$$
The scalars $c_k$ of course depend on the choice of the orthogonal $\Psi_k$ polynomials. Consequently, the initial conditions for $E(x, t)$ and $V_M(x, t)$ are
$$E(x, 0) = u_0(x, 0) = \frac{1}{c_0}\, \mathbb{E}_\xi\big[f(x, \xi)\, \Psi_0(\xi)\big], \qquad V_M(x, 0) = \sum_{k=1}^{M} c_k\, u_k(x, 0)^2 = \sum_{k=1}^{M} \frac{1}{c_k}\, \mathbb{E}_\xi\big[f(x, \xi)\, \Psi_k(\xi)\big]^2.$$
In a similar manner, we can “integrate away” the randomness in Eq. (7) by projecting onto each basis polynomial. This is discussed in detail in the next section.
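
As a concrete illustration of this projection step (our own sketch, not the paper's code), the coefficients $u_k(x, 0)$ can be evaluated by Gauss–Hermite quadrature; for the sinusoidal initial condition $f(x, \xi) = \xi \sin(x)$ used in the paper's example, this recovers $u_1(x, 0) = \sin(x)$ with all other coefficients zero.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def initial_pce_coefficients(f, x_grid, M, n_quad=40):
    """
    Project the uncertain initial condition u(x, 0; xi) = f(x, xi) onto the
    Hermite basis: u_k(x, 0) = (1 / c_k) * E_xi[f(x, xi) * Psi_k(xi)], c_k = k!.
    Expectations over xi ~ N(0, 1) are taken by Gauss-Hermite quadrature.
    """
    nodes, weights = hermegauss(n_quad)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to a probability measure
    u0 = np.zeros((M + 1, len(x_grid)))
    for k in range(M + 1):
        basis = np.zeros(k + 1); basis[k] = 1.0
        psi_k = hermeval(nodes, basis)                  # Psi_k at the quadrature nodes
        f_vals = f(x_grid[:, None], nodes[None, :])     # shape (n_x, n_quad)
        u0[k] = (f_vals * psi_k * weights).sum(axis=1) / math.factorial(k)
    return u0

# Example: f(x, xi) = xi * sin(x) gives u_1(x, 0) = sin(x) and zero otherwise.
x = np.linspace(0.0, 3.0, 61)
coeffs = initial_pce_coefficients(lambda x, xi: xi * np.sin(x), x, M=4)
assert np.allclose(coeffs[1], np.sin(x), atol=1e-12) and np.allclose(coeffs[0], 0.0)
```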

Section snippets

Inviscid Burgers’ Equation

Our choice of orthogonal polynomials $\Psi_k$ relies on the distribution of the $\xi$ random variable. Throughout this paper, we will choose $\xi \sim \mathcal{N}(0, 1)$ and the $\Psi_k$ to be Hermite polynomials; however, many of the results apply almost identically to other distributions and their corresponding polynomials, with some caveats around convergence [15], [20].

Note that Hermite polynomials satisfy $\langle \Psi_k \Psi_j \rangle = (k!)\, \delta_{kj}$ and, by [21],
$$\langle \Psi_i \Psi_j \Psi_k \rangle = \begin{cases} 0 & \text{if } i + j + k \text{ is odd or } \max(i, j, k) > s, \\ \dfrac{i!\, j!\, k!}{(s - i)!\, (s - j)!\, (s - k)!} & \text{otherwise,} \end{cases}$$
where $s = (i + j + k)/2$. Now
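
Since these triple products drive the coupling in the projected system, it is convenient to tabulate them once. The sketch below (ours; function names are hypothetical) implements the closed form above and cross-checks it against Gauss–Hermite quadrature under the $\mathcal{N}(0, 1)$ measure.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def triple_product(i, j, k):
    """<Psi_i Psi_j Psi_k> for probabilists' Hermite polynomials (closed form above)."""
    total = i + j + k
    if total % 2 == 1 or 2 * max(i, j, k) > total:
        return 0.0
    s = total // 2
    return (math.factorial(i) * math.factorial(j) * math.factorial(k)
            / (math.factorial(s - i) * math.factorial(s - j) * math.factorial(s - k)))

# Cross-check against Gauss-Hermite quadrature under the N(0, 1) measure.
nodes, weights = hermegauss(60)
weights = weights / np.sqrt(2.0 * np.pi)

def he(n, xi):
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(xi, c)

for i in range(6):
    for j in range(6):
        for k in range(6):
            quad = np.sum(weights * he(i, nodes) * he(j, nodes) * he(k, nodes))
            assert np.isclose(quad, triple_product(i, j, k), atol=1e-6)
```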

Bluff-and-fix (BNF) Algorithm

Recall that we are solving inviscid Burgers’ equation with the uncertain initial condition $u(x, 0; \xi) = \xi \sin(x)$ for $\xi \sim \mathcal{N}(0, 1)$, as given by Eq. (10). In Sections 3.1–3.3, the goal is to solve the $M$ system in Eq. (11) for the coefficient functions $u_0, \ldots, u_M$ numerically, thereby giving an approximation of $u$ by the partial summation $S_M = \sum_{k=0}^{M} u_k(x, t)\, \Psi_k(\xi)$. In Section 3.4, the objective instead is to estimate the mean and variance of the

Numerical Results

We report solutions for Burgers’ equation with an uncertain initial condition, namely $u(x, 0; \xi) = \xi \sin(x)$ for $\xi \sim \mathcal{N}(0, 1)$. The equation is solved for $x \in [0, 3]$ on a uniform grid with $\Delta x = 0.05$. Time integration is based on the classical four-stage Runge–Kutta (RK4) scheme with $t \in [0, 0.25]$ and $\Delta t = 0.001$.
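
For readers who want to reproduce a comparable setup, the following is a rough sketch (ours, not the authors' solver) of RK4 time stepping on this grid, assuming the Galerkin projection of $u_t + u\, u_x = 0$ takes the standard form $\partial u_k/\partial t = -(1/c_k) \sum_{i,j} \langle \Psi_i \Psi_j \Psi_k \rangle\, u_i\, \partial u_j/\partial x$. The central-difference spatial discretization and the boundary handling implied by np.gradient are illustrative choices only, and triple_product refers to the helper sketched earlier.

```python
import math

import numpy as np

def burgers_pce_rhs(u, dx, triple, c):
    """
    du_k/dt = -(1/c_k) * sum_{i,j} <Psi_i Psi_j Psi_k> * u_i * du_j/dx,
    with u of shape (M + 1, n_x).  np.gradient (central differences in the
    interior, one-sided at the edges) is an illustrative discretization only.
    """
    dudx = np.gradient(u, dx, axis=1)
    rhs = np.zeros_like(u)
    M = u.shape[0] - 1
    for k in range(M + 1):
        acc = np.zeros(u.shape[1])
        for i in range(M + 1):
            for j in range(M + 1):
                if triple[i, j, k] != 0.0:
                    acc += triple[i, j, k] * u[i] * dudx[j]
        rhs[k] = -acc / c[k]
    return rhs

def rk4_step(u, dt, rhs, *args):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(u, *args)
    k2 = rhs(u + 0.5 * dt * k1, *args)
    k3 = rhs(u + 0.5 * dt * k2, *args)
    k4 = rhs(u + dt * k3, *args)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Grid, time step, and initial condition matching the setup described above.
M, dx, dt = 4, 0.05, 0.001
x = np.linspace(0.0, 3.0, 61)
c = np.array([math.factorial(k) for k in range(M + 1)], dtype=float)
triple = np.array([[[triple_product(i, j, k)   # helper from the earlier sketch
                     for k in range(M + 1)] for j in range(M + 1)] for i in range(M + 1)])
u = np.zeros((M + 1, x.size)); u[1] = np.sin(x)   # u(x, 0; xi) = xi * sin(x)
for _ in range(int(round(0.25 / dt))):
    u = rk4_step(u, dt, burgers_pce_rhs, dx, triple, c)
mean, variance = u[0], np.einsum("k,kx->x", c[1:], u[1:] ** 2)   # E and V_M at t = 0.25
```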

Throughout this discussion, we will define error as the deviation of the solution approximation $\hat{u}^{(M)}$ produced by bluff-and-fix from the solution obtained by solving the full $M$ system via RK4. That is, our computations

Conclusion

Polynomial chaos (PC) methods are effective for incorporating and quantifying uncertainties in problems governed by partial differential equations. In this paper, which is an extended version of [1], we present a promising algorithm (one step bluff-and-fix) for utilizing the solution to a polynomial chaos $M-1$ system arising from inviscid Burgers’ equation to approximate the solution to the corresponding $M$ system. Bluff-and-fix is considered in the context of inviscid Burgers’ equation to

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (22)

  • Xiu, D., et al., The Wiener–Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput. (2002)