
Monte Carlo estimation of the solution of fractional partial differential equations

  • Vassili Kolokoltsov, Feng Lin and Aleksandar Mijatović

Abstract

The paper is devoted to numerical solutions of fractional PDEs based on their probabilistic interpretation; that is, we construct approximate solutions via certain Monte Carlo simulations. The main results provide upper bounds on the error between the exact solution and its Monte Carlo approximation, an estimate of the fluctuations via an appropriate central limit theorem (CLT), and the construction of confidence intervals. Moreover, we provide rates of convergence in the CLT via Berry-Esseen type bounds. Concrete numerical computations and illustrations are included.

1 Introduction

The study of fractional partial differential equations (FPDEs) is a very popular topic of modern research due to their ubiquitous applications in the natural sciences. In particular, there is an immense amount of literature devoted to the numerical solution of FPDEs. Most of it, however, exploits various kinds of deterministic algorithms (lattice approximations, finite element methods, etc.); see e.g. [1, 2, 3, 17] and numerous references therein. There are only a few papers based on probabilistic methods. For instance, [16] exploits the CTRW (continuous time random walk) approximation for solutions to FPDEs, and [12] is based on the exact probabilistic representation.

The CTRW approximation to the solutions of FPDEs was developed by physicists more than half a century ago, and it became one of the basic stimuli for the modern development of fractional calculus. The exact probabilistic representation appeared a bit later, first for fractional equations and then for generalized fractional (e.g. mixed fractional) ones; see e.g. [10, 11, 8, 13] for various versions of this representation. There are now many books with a detailed presentation of the basics of fractional calculus; see e.g. [9, 13, 7].

The present paper is devoted to the numerical solution of fractional PDEs based on their probabilistic representation, with the main new point being a detailed discussion of the convergence rates. Namely, the main results provide upper bounds on the error between the exact solution and the Monte Carlo approximation, an estimate of the fluctuations via an appropriate central limit theorem, and the construction of confidence intervals. Concrete numerical computations and illustrations are included.

We denote $C_\infty(\mathbb{R}^d) := \{f : \mathbb{R}^d \to \mathbb{R}\ \text{continuous and vanishing at infinity}\}$. Let $g \in C_\infty(\mathbb{R}^d)$ and consider the problem

$$\left(-{}_tD_a + A_x\right)u(t,x) = g(x),\quad (t,x)\in(a,b]\times\mathbb{R}^d,\qquad u(a,x) = \phi(x),\quad x\in\mathbb{R}^d, \tag{1.1}$$

where $A_x$ is the generator of a Feller semigroup on $C_\infty(\mathbb{R}^d)$ acting in the variable $x$, $\phi \in \mathrm{Dom}(A_x)$, and $-{}_tD_a$ is a generalised differential operator of Caputo type of order less than 1 acting on the time variable $t \in [a, b]$.

The solution $u \in C((-\infty, b] \times \mathbb{R}^d)$ of problem (1.1) exists by [4] and has the stochastic representation (see [4], Equation (4) and Theorem 4.20)

$$u(t,x) = \mathbb{E}\Big[\phi\big(X^x_{T_t}\big) + \int_0^{T_t} g\big(X^x_s\big)\,ds\Big], \tag{1.2}$$

where $\{X^x_s\}_{s\geqslant0}$ is the stochastic process started at $x \in \mathbb{R}^d$ and generated by $A_x$. Let $\{Y^{a,t}_s\}_{s\geqslant0}$ be the decreasing $[a,b]$-valued stochastic process started at $t \in [a,b]$ and generated by $-{}_tD_a$, and let $T_t = \inf\{s > 0 : Y^{a,t}_s < a\}$. In this paper we assume that $\{Y^{a,t}_s\}$ is a decreasing $\alpha$-stable Lévy process started at $t$, i.e. $Y^{a,t}_s \stackrel{d}{=} t - s^{1/\alpha}\eta$, where $\eta$ is a random variable with $\alpha$-stable distribution whose Laplace transform is $\mathbb{E}[e^{-z\eta}] = e^{-z^\alpha/\cos(\pi\alpha/2)}$ (we denote this by $\eta \sim S_\alpha(1,1,0)$).
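Since everything below reduces to simulating $\eta$, we record a minimal sampling sketch here. The paper does not prescribe a particular sampler; the code below uses the standard Chambers-Mallows-Stuck representation of a totally skewed $\alpha$-stable variable, $0 < \alpha < 1$, in the $S_\alpha(1,1,0)$ normalisation above (the function name is ours).

```python
import numpy as np

def sample_eta(alpha, size, rng):
    """eta ~ S_alpha(1, 1, 0) for 0 < alpha < 1, via Chambers-Mallows-Stuck,
    so that E[exp(-z*eta)] = exp(-z**alpha / cos(pi*alpha/2))."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)  # U ~ Uniform(-pi/2, pi/2)
    W = rng.exponential(1.0, size)                # W ~ Exp(1)
    B = np.pi / 2                                 # = arctan(tan(pi*alpha/2))/alpha for skewness 1, alpha < 1
    scale = np.cos(np.pi * alpha / 2) ** (-1.0 / alpha)
    return (scale * np.sin(alpha * (U + B)) / np.cos(U) ** (1.0 / alpha)
            * (np.cos(U - alpha * (U + B)) / W) ** ((1.0 - alpha) / alpha))
```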

Remark 1.1

Given a Lévy measure $\nu$ on $\mathbb{R}_+$ satisfying

$$\int_0^\infty \min\{1, r\}\,\nu(dr) < \infty,$$

the operator $-{}_tD_a$ is defined by

$$-{}_tD_a f(t) := \int_0^{t-a}\big(f(t-r) - f(t)\big)\,\nu(dr) + \big(f(a) - f(t)\big)\int_{t-a}^\infty \nu(dr),\qquad t \in (a,b].$$

When $\{X^x_s\}_{s\geqslant0}$ is a Brownian motion, $A_x$ is $\frac12\Delta$, where $\Delta = \sum_{i=1}^d \partial^2_{x_i}$. If $\{Y^{a,t}_s\}$ is the deterministic drift, i.e. ${}_tD_a = \frac{d}{dt}$ and $g = 0$, then (1.1) becomes

$$\frac12\Delta u(t,x) = \frac{d}{dt}u(t,x), \tag{1.3}$$

the familiar heat equation.

We assume $\{X^x_s\}_{s\geqslant0}$ is isotropic $\beta$-stable (what 'isotropic' means is explained in Section 2, after Lemma 2.1). In this paper we shall investigate some properties of the representation (1.2) and of its Monte Carlo estimator

$$u^h_N(t,x) = \frac1N\sum_{k=1}^N\bigg(\phi\big(X^{x,k}_{T^k_t}\big) + \sum_{i=1}^{\lfloor T^k_t/h\rfloor} h\, g\big(X^{x,k}_{t^k_i}\big)\bigg), \tag{1.4}$$

where $h > 0$ is the step length, $T^k_t$ are iid samples of $T_t$, and $t^k_i = (i-1)h$. Note that we can sample the stopping time $T_t$ (see Lemma 2.1 below), then sample the isotropic $\beta$-stable process $\{X^x_s\}$, and finally evaluate the estimator (1.4).

In Section 2 we mainly focus on the situation when g = 0, i.e. the estimator now is

$$u_N(t,x) = \frac1N\sum_{k=1}^N \phi\big(X^{x,k}_{T^k_t}\big). \tag{1.5}$$

For the central limit theorem and the Berry-Esseen bound to hold, we only need to estimate the tail of the stable process at the stopping time, i.e. $\mathbb{P}[|X^x_{T_t}| > s]$ for large $s$. We begin by showing that the tail of a multidimensional stable distribution has the same order as the tail of each of its components. In Section 3 we study the properties of the Monte Carlo estimator when the forcing term $g \neq 0$. We estimate an upper bound for the second moment of the estimator and then the $L^2$ error between the estimator and the solution. Besides, we use these properties to show that the central limit theorem holds, using triangular arrays. In Section 4 we give numerical examples demonstrating the performance of our simulation algorithm.

2 Properties of the estimator when the forcing term g = 0

In this paper, for functions $f, g : \mathbb{R}^d \to \mathbb{R}$, we use the notation $f(x) = O(g(x))$, meaning that $\big|\frac{f(x)}{g(x)}\big|$ is bounded as $|x| \to \infty$. Also we use the notation $f(x) \sim g(x)$, meaning that both $\big|\frac{f(x)}{g(x)}\big|$ and $\big|\frac{g(x)}{f(x)}\big|$ are bounded as $|x| \to \infty$.

In this section we study the situation when $g(x) = 0$ for all $x \in \mathbb{R}^d$, in which case the stochastic representation (1.2) becomes

$$u(t,x) = \mathbb{E}\big[\phi\big(X^x_{T_t}\big)\big] \tag{2.1}$$

and the estimator now is defined in (1.5).

Our main results tell us how close uN(t, x) and u(t, x) are, namely:

Theorem 2.1

  1. For every continuous function $\phi : \mathbb{R}^d \to \mathbb{R}$,

    $$u_N(t,x) \xrightarrow{a.s.} u(t,x),\quad\text{as } N \to \infty. \tag{2.2}$$
  2. Let $S_N(t,x) = \sqrt N\,\big(u_N(t,x) - u(t,x)\big)/\sigma(t,x)$, with $\sigma(t,x)$ as in part 3 below, and let $W$ be a standard normal random variable. If $\phi$ satisfies $\phi(x) = O\big(|x|^{\beta/(2+\delta)}\big)$ for some $\delta > 0$, then the central limit theorem holds, i.e. for every bounded uniformly continuous function $\psi$,

    $$\mathbb{E}\big[\psi\big(S_N(t,x)\big)\big] \to \mathbb{E}[\psi(W)]\quad\text{as } N \to \infty.$$
  3. Let $Y(t,x) := \phi\big(X^x_{T_t}\big) - \mathbb{E}\big[\phi\big(X^x_{T_t}\big)\big]$, and denote $\mathbb{E}[Y(t,x)^2] = \sigma(t,x)^2$, $\mathbb{E}[|Y(t,x)|^3] = \rho(t,x)$. If $\phi$ satisfies $\phi(x) = O\big(|x|^{\beta/(3+\delta)}\big)$ for some $\delta > 0$, then for all $C^3$ functions $\psi : \mathbb{R} \to \mathbb{R}$,

    $$\big|\mathbb{E}\big[\psi\big(S_N(t,x)\big)\big] - \mathbb{E}[\psi(W)]\big| \leqslant 0.433\,\|\psi'''\|_\infty\,\frac{\rho(t,x)}{\sqrt N\,\sigma(t,x)^3}.$$

    Here $C^3$ means the space of functions with bounded third derivatives.

In other words, the central limit theorem can be written using convergence in distribution:

$$\sqrt N\,\big(u_N(t,x) - u(t,x)\big) \xrightarrow{d} N\big(0, \sigma(t,x)^2\big)\quad\text{as } N \to \infty.$$

Since the estimator is unbiased, Theorem 2.1(i) follows from the strong law of large numbers. Part (ii) is the standard central limit theorem, and we only need to show that $\mathbb{E}\big[\phi\big(X^x_{T_t}\big)^2\big] < \infty$. Part (iii) is a version of the Berry-Esseen bound, and we need to show that $\mathbb{E}\big[|\phi\big(X^x_{T_t}\big)|^3\big] < \infty$. These facts are evident if $\phi$ is bounded. To deal with unbounded $\phi$, let us recall the following fact: for any random variable $U$,

$$\mathbb{E}[U^2] = \int_0^\infty \mathbb{P}[U^2 > t]\,dt. \tag{2.3}$$

It is finite if $\mathbb{P}[|U| > t] = O(t^{-(2+\delta)})$ for some positive constant $\delta$. Returning to our problem: once we know the tail of $X^x_{T_t}$ and the growth rate of $\phi$, the tail of $\phi\big(X^x_{T_t}\big)$ becomes clear, as does the finiteness of the moments of $\phi\big(X^x_{T_t}\big)$.

Luckily, we have the following result:

Proposition 2.1

Assume that $\{X_s\}_{s\geqslant0}$ is a $\beta$-stable process; then $\mathbb{P}\big[|X^x_{T_t}| > u\big] = O(u^{-\beta})$.

To prove Proposition 2.1, we need a little lemma telling us that the distribution of Tt is analytically accessible:

Lemma 2.1

Denote $\bar a := t - a$; then $T_t \stackrel{d}{=} \bar a^\alpha\,\eta^{-\alpha}$, where $\eta \sim S_\alpha(1,1,0)$.

Proof

Note that $Y^{a,t}_s \stackrel{d}{=} t - s^{1/\alpha}\eta$ and $\{T_t > s\} = \{Y^{a,t}_s > a\}$, since $Y^{a,t}$ has monotone paths. Hence

$$\mathbb{P}[T_t > s] = \mathbb{P}\big[t - s^{1/\alpha}\eta > a\big] = \mathbb{P}\big[s^{1/\alpha}\eta < \bar a\big] = \mathbb{P}\big[s < \bar a^\alpha\eta^{-\alpha}\big].\qquad\square$$

Together with the fact that $X^x_s$ is $\beta$-stable, Lemma 2.1 yields

$$X^x_{T_t} - x \stackrel{d}{=} T_t^{1/\beta} X_1 \stackrel{d}{=} \bar a^{\alpha/\beta}\,\eta^{-\alpha/\beta}\,X_1. \tag{2.4}$$
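Thus, once $X_1$ can be sampled, (2.4) gives a direct sampler for $X^x_{T_t}$. As an illustration, here is a sketch for $d = 1$ with symmetric $\beta$-stable $X_1$ (the setting of Section 4), again via Chambers-Mallows-Stuck and reusing sample_eta from Section 1; the function names and the restriction to the symmetric case are ours.

```python
def sample_sym_stable(beta, size, rng):
    """Symmetric beta-stable S_beta(1, 0, 0), beta != 1, with characteristic
    function exp(-|z|**beta), via Chambers-Mallows-Stuck."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(beta * U) / np.cos(U) ** (1.0 / beta)
            * (np.cos((1.0 - beta) * U) / W) ** ((1.0 - beta) / beta))

def sample_X_at_Tt(x, a_bar, alpha, beta, size, rng):
    """Draw X^x_{T_t} using (2.4): X^x_{T_t} - x =d abar^(a/b) * eta^(-a/b) * X_1."""
    eta = sample_eta(alpha, size, rng)
    X1 = sample_sym_stable(beta, size, rng)
    return x + a_bar ** (alpha / beta) * eta ** (-alpha / beta) * X1
```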

We also need Lemma 2.2 and Lemma 2.3 given below. Before that, let us explain what 'isotropic' means in our assumption on $\{X_s\}_{s\geqslant0}$.

For a $d$-dimensional $\beta$-stable random variable $U = (U^{(1)}, \dots, U^{(d)})$ on $\mathbb{R}^d$, there exist a finite measure $\lambda$ on the unit sphere $S^{d-1}$ and a vector $\gamma \in \mathbb{R}^d$ such that the characteristic function of $U$ satisfies

$$\hat U(z) := \mathbb{E}\big[e^{i\langle z, U\rangle}\big] = \exp\Big\{-\int_{S^{d-1}} |\langle z,\xi\rangle|^\beta\Big(1 - i\tan\frac{\pi\beta}{2}\,\mathrm{sgn}\langle z,\xi\rangle\Big)\lambda(d\xi) + i\langle\gamma, z\rangle\Big\}$$

for $\beta \neq 1$ (with an analogous formula for $\beta = 1$). Hence each component of $U$ is a 1-dimensional stable random variable, still with stability index $\beta$. Besides, for a 1-dimensional $\beta$-stable random variable $V$ whose characteristic function has the form

$$\hat V(z) = \mathbb{E}\big[e^{iVz}\big] = \exp\big(-\sigma^\beta|z|^\beta\big(1 - i\rho\,(\mathrm{sign}\,z)\tan(\pi\beta/2)\big) + i\mu z\big),$$

we use the notation $V \sim S_\beta(\sigma,\rho,\mu)$. We say a $d$-dimensional stable random variable $U$ is isotropic if its coordinates have the same distribution, i.e. $U^{(i)} \sim S_\beta(\sigma,\rho,\mu)$, $i = 1,\dots,d$. We say a process $\{X_s\}_{s\geqslant0}$ is isotropic stable if $X_1$ is an isotropic stable random variable.

Lemma 2.2

Let $U = (U^{(1)},\dots,U^{(d)})$ be an isotropic $d$-dimensional $\beta$-stable random variable with $U^{(i)} \sim S_\beta(\sigma,\rho,\mu)$; then $\mathbb{P}[|U| > s] \sim s^{-\beta}$ as $s \to \infty$.

Proof

Since $\big\{|U| = \sqrt{(U^{(1)})^2 + \cdots + (U^{(d)})^2} > s\big\} \supseteq \{|U^{(1)}| > s\}$, we have

$$\mathbb{P}[|U| > s] \geqslant \mathbb{P}\big[|U^{(1)}| > s\big].$$

Since $\{|U| > s\} \subseteq \big\{\max_{1\leqslant i\leqslant d}|U^{(i)}| > s/\sqrt d\big\} \subseteq \bigcup_{i=1}^d\big\{|U^{(i)}| > s/\sqrt d\big\}$, we have

$$\mathbb{P}[|U| > s] \leqslant \sum_{i=1}^d \mathbb{P}\big[|U^{(i)}| > s/\sqrt d\big].$$

Now recall the well-known result on the tail of a 1-dimensional stable random variable: if $V \sim S_\beta(\sigma,\rho,\mu)$, then

$$\lim_{s\to\infty} s^\beta\,\mathbb{P}[|V| > s] = C_\beta\,\sigma^\beta, \tag{2.5}$$

where $C_\beta = \Big(\int_0^\infty x^{-\beta}\sin x\,dx\Big)^{-1} = \frac{1-\beta}{\Gamma(2-\beta)\cos(\pi\beta/2)}$ (see [15], Property 1.2.15).

Hence for any $\epsilon > 0$ there exists some $M$ such that for all $s > M$ and $i = 1,\dots,d$,

$$(C_\beta\sigma^\beta - \epsilon)\,s^{-\beta} \leqslant \mathbb{P}\big[|U^{(i)}| > s\big] \leqslant (C_\beta\sigma^\beta + \epsilon)\,s^{-\beta}.$$

Hence for $s > \sqrt d\,M$,

$$\mathbb{P}[|U| > s] \leqslant \sum_{i=1}^d \mathbb{P}\big[|U^{(i)}| > s/\sqrt d\big] \leqslant d^{1+\beta/2}\,\big(\epsilon + C_\beta\sigma^\beta\big)\,s^{-\beta}. \tag{2.6}$$

Therefore $\mathbb{P}[|U| > s] \sim s^{-\beta}$ as $s \to \infty$.□
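A quick simulation sanity check of Lemma 2.2 (our own illustration, not from the paper): for $U$ with iid symmetric $\beta$-stable coordinates, doubling $s$ should divide the empirical tail probability by roughly $2^\beta$.

```python
# Empirical check that P[|U| > s] decays like s**(-beta) (Lemma 2.2).
rng = np.random.default_rng(1)
beta, d, n = 1.5, 3, 10**6
U = sample_sym_stable(beta, (n, d), rng)   # iid coordinates, hence isotropic in the sense above
norms = np.linalg.norm(U, axis=1)
for s in (10.0, 20.0, 40.0):
    print(s, np.mean(norms > s))           # successive ratios should be close to 2**1.5 ~ 2.83
```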

Lemma 2.3

Let $U$, $V$ be positive random variables such that

$$\lim_{t\to\infty} t^\alpha\,\mathbb{P}[U > t] = C_1,\qquad \lim_{t\to\infty} t^\alpha\,\mathbb{P}[V > t] = C_2,$$

where $C_1 > C_2$; then

$$\mathbb{P}[U - V > t] = O(t^{-\alpha})\quad\text{for } t \to \infty.$$

Proof

Given a positive number $M$ and a small $\epsilon > 0$, there exists $T$ such that for all $t > T$,

$$\mathbb{P}[U - V > t] \geqslant \mathbb{P}\big[U > (M+1)t\big] - \mathbb{P}[V > Mt] \geqslant \frac{C_1 - \epsilon}{(M+1)^\alpha}\,t^{-\alpha} - \frac{C_2 + \epsilon}{M^\alpha}\,t^{-\alpha} = \frac{1}{M^\alpha}\Big(\frac{(C_1-\epsilon)M^\alpha}{(M+1)^\alpha} - (C_2+\epsilon)\Big)\,t^{-\alpha}.$$

If we pick $M$ big enough (and $\epsilon$ small enough), we get $\mathbb{P}[U - V > t] \geqslant C\,t^{-\alpha}$ for some constant $C > 0$. On the other hand, for large $t$,

$$\mathbb{P}[U - V > t] = \int_{v>0}\mathbb{P}[U - v > t]\,\mathbb{P}[V \in dv] \leqslant \int_{v>0}\mathbb{P}[U > t]\,\mathbb{P}[V \in dv] = \mathbb{P}[U > t] \leqslant (C_1 + \epsilon)\,t^{-\alpha}.$$

Therefore $\mathbb{P}[U - V > t] \sim t^{-\alpha}$ as $t \to \infty$.□

Lemma 2.2 tells us the order of the tail of a multidimensional stable process. Lemma 2.3 gives the order of the tail of the difference of certain random variables, and we can apply it to the logarithm of (2.4), i.e. to $\log|X_1| + \frac\alpha\beta\log\bar a - \frac\alpha\beta\log\eta$.

Now we can come back to the proof of Proposition 2.1.

Proof

Let us estimate the tail of $X^x_{T_t}$. For large $u > 0$,

$$\begin{aligned}\mathbb{P}\big[|X^x_{T_t}| > u\big] &= \mathbb{P}\big[|\bar a^{\alpha/\beta}\eta^{-\alpha/\beta}X_1 + x| > u\big] \leqslant \mathbb{P}\big[\bar a^{\alpha/\beta}\eta^{-\alpha/\beta}|X_1| > u - |x|\big]\\ &= \mathbb{P}\Big[\log|X_1| - \frac\alpha\beta\log\eta > \log(u - |x|) - \frac\alpha\beta\log\bar a\Big]\\ &= \mathbb{P}[A - B > r, A > 0, B > 0] + \mathbb{P}[A - B > r, A > 0, B < 0] + \mathbb{P}[A - B > r, A < 0, B < 0],\end{aligned} \tag{2.7}$$

where $A := \log|X_1|$, $B := \frac\alpha\beta\log\eta$, $r := \log(u - |x|) - \frac\alpha\beta\log\bar a$. (Note that for large $u$ we have $r > 0$.)

Let $X_1 = (X^{(1)},\dots,X^{(d)})$ with $X^{(i)} \sim S_\beta(\sigma,\rho,\mu)$, $i = 1,\dots,d$. By the proof of Lemma 2.2, for any $\epsilon > 0$ there exists some $M$ such that for all $s > M$ and $i = 1,\dots,d$,

$$\mathbb{P}\big[|X^{(i)}| > s\big] \leqslant (C_\beta\sigma^\beta + \epsilon)\,s^{-\beta}.$$

Hence for $s > \sqrt d\,M$,

$$\mathbb{P}[|X_1| > s] \leqslant \sum_{i=1}^d \mathbb{P}\big[|X^{(i)}| > s/\sqrt d\big] \leqslant d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta)\,s^{-\beta},$$

and for $t > \log(\sqrt d\,M)$,

$$\mathbb{P}[\log|X_1| > t] = \mathbb{P}[|X_1| > e^t] \leqslant d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta)\,e^{-\beta t}.$$

Now let us treat the three cases in (2.7). Assume $r > \log(\sqrt d\,M)$:

  1. When $A > 0$, $B > 0$, we have

    $$\mathbb{P}[A - B > r, A > 0, B > 0] \leqslant \mathbb{P}[A > r] = \mathbb{P}[|X_1| > e^r] \leqslant d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta)\,e^{-\beta r}. \tag{2.8}$$
  2. When $A > 0$, $B < 0$, fix a constant $S > 0$, pick the integer $k = \lfloor r/S\rfloor$, and divide the event $\{A + (-B) > r\}$ into $k$ parts:

    $$\begin{aligned}\{A - B > r\} &= \bigcup_{i=1}^{k-1}\Big\{A - B > r,\ -B \in \Big[\frac{i-1}{k}r, \frac ik r\Big)\Big\} \cup \Big\{A - B > r,\ -B > \frac{k-1}{k}r\Big\}\\ &\subseteq \bigcup_{i=1}^{k-1}\Big\{A > \frac{k-i}{k}r,\ -B \in \Big[\frac{i-1}{k}r, \frac ik r\Big)\Big\} \cup \Big\{-B > \frac{k-1}{k}r\Big\} \subseteq \bigcup_{i=1}^{k}\Big\{A > \frac{k-i}{k}r,\ -B > \frac{i-1}{k}r\Big\}. \tag{2.9}\end{aligned}$$

    Hence, by the independence of $A$ and $B$,

    $$\mathbb{P}[A - B > r, A > 0, B < 0] \leqslant \sum_{i=1}^k\mathbb{P}\Big[A > \frac{k-i}{k}r,\ -B > \frac{i-1}{k}r\Big] = \sum_{i=1}^k\mathbb{P}\Big[A > \frac{k-i}{k}r\Big]\,\mathbb{P}\Big[-B > \frac{i-1}{k}r\Big]. \tag{2.10}$$

    Recall that

    $$\mathbb{P}\Big[\log|X_1| > \frac{k-i}{k}r\Big] \leqslant d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta)\,e^{-\frac{k-i}{k}\beta r}. \tag{2.11}$$

    Using the result (3.7) that we shall establish later, we have

    $$\mathbb{E}\big[\eta^{-2\alpha}\big] = \Big(\cos\frac{\pi\alpha}{2}\Big)^2\,\frac{2}{\Gamma(1+2\alpha)}. \tag{2.12}$$

    By the Markov inequality,

    $$\mathbb{P}\Big[\frac\alpha\beta\log(\eta^{-1}) \geqslant \frac{i-1}{k}r\Big] = \mathbb{P}\Big[\eta^{-1} \geqslant e^{\frac\beta\alpha\frac{i-1}{k}r}\Big] \leqslant \frac{\mathbb{E}[\eta^{-2\alpha}]}{\big(e^{\frac\beta\alpha\frac{i-1}{k}r}\big)^{2\alpha}} \leqslant 2\,e^{-2\frac{i-1}{k}\beta r}. \tag{2.13}$$

    Combining (2.10), (2.11) and (2.13), we have

    $$\begin{aligned}\mathbb{P}[A - B > r, A > 0, B < 0] &\leqslant \sum_{i=1}^k d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta)\,e^{-\frac{k-i}{k}\beta r}\cdot 2\,e^{-2\frac{i-1}{k}\beta r} = 2d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta)\sum_{i=1}^k e^{-\frac{i-2}{k}\beta r}\,e^{-\beta r}\\ &\leqslant 2d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta)\,\frac{e^{\beta r/k}}{1 - e^{-\beta r/k}}\,e^{-\beta r} \leqslant 2d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta)\,\frac{e^{2\beta S}}{e^{\beta S} - 1}\,e^{-\beta r}. \tag{2.14}\end{aligned}$$
  3. When $A < 0$, $B < 0$, then

    $$\mathbb{P}[A - B > r, A < 0, B < 0] \leqslant \mathbb{P}[A < 0, -B > r] \leqslant \mathbb{P}[-B > r] = \mathbb{P}\Big[\frac\alpha\beta\log(\eta^{-1}) > r\Big] = \mathbb{P}\big[\eta^{-1} > e^{\frac\beta\alpha r}\big] \leqslant \mathbb{E}[\eta^{-\alpha}]\,\big(e^{\frac\beta\alpha r}\big)^{-\alpha} = \mathbb{E}[\eta^{-\alpha}]\,e^{-\beta r}. \tag{2.15}$$

Combining the three cases above, we conclude that for large $u$,

$$\mathbb{P}\big[|X^x_{T_t}| > u\big] \leqslant \Big(\Big(1 + \frac{2e^{2\beta S}}{e^{\beta S}-1}\Big)d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta) + 1\Big)e^{-\beta r} = O\Big(e^{-\beta\big(\log(u-|x|)-\frac\alpha\beta\log\bar a\big)}\Big) = O(u^{-\beta}).\qquad\square \tag{2.16}$$

Now let us finish the proofs of our main results, starting with Theorem 2.1(ii).

Proof

By Proposition 2.1 and the assumption $\phi(x) = O\big(|x|^{\beta/(2+\delta)}\big)$,

$$\mathbb{P}\big[|\phi\big(X^x_{T_t}\big)| > u\big] = O\big(u^{-(2+\delta)}\big), \tag{2.17}$$

and by (2.3), $\mathbb{E}\big[\phi\big(X^x_{T_t}\big)^2\big]$ is finite.□

For the proof of Theorem 2.1(iii), $\mathbb{E}\big[|\phi\big(X^x_{T_t}\big)|^3\big]$ is finite by a similar argument. For the rest of the proof, see [14], page 356, Variant Berry-Esseen Theorem.

Remark 2.1

  1. If $C_\alpha < C_\beta\sigma^\beta$, by Lemma 2.3 we have

    $$\mathbb{P}[A - B > r] \geqslant \mathbb{P}[A - B > r, A > 0, B > 0] \geqslant C\,t^{-\beta}, \tag{2.18}$$

    where $t = e^r$ and $C$ is a constant that can be chosen from the proof of Lemma 2.3. This means that the order $t^{-\beta}$ is the best possible.

  2. In the proof of Proposition 2.1 we need $r = \log(u - |x|) - \frac\alpha\beta\log\bar a$ and $r > \log(\sqrt d\,M)$. Hence there exists some constant $M_0$ such that (2.16) holds for $u > M_0$, and $M_0$ has order $d^{1/2}$.

Besides, we can give a rough upper bound for $\mathbb{E}\big[\phi\big(X^x_{T_t}\big)^2\big]$.

Example 2.1

If $\phi$ satisfies $\phi(x) \leqslant |x|^{\beta/(\delta+2)}$, where $\delta > 0$, then from Remark 2.1 we know that there exists some $M_0$ such that for all $t > M_0$,

$$\mathbb{P}\big[|X^x_{T_t}| > t\big] \leqslant \Big(\Big(1 + \frac{2e^{2\beta S}}{e^{\beta S}-1}\Big)d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta) + 1\Big)e^{-\beta r} = M^1 e^{-\beta r} = M^1 e^{-\beta\big(\log(t-|x|)-\frac\alpha\beta\log\bar a\big)} \leqslant M^2\,t^{-\beta}, \tag{2.19}$$

where $M^1 = \big(1 + \frac{2e^{2\beta S}}{e^{\beta S}-1}\big)d^{1+\beta/2}(\epsilon + C_\beta\sigma^\beta) + 1$ and $M^2 = 2^\beta\,\bar a^\alpha\,M^1$. Hence,

$$\mathbb{E}\big[\phi\big(X^x_{T_t}\big)^2\big] = \int_0^\infty\mathbb{P}\big[\phi\big(X^x_{T_t}\big)^2 > t\big]\,dt \leqslant M_0 + \int_{M_0}^\infty\mathbb{P}\big[|X^x_{T_t}| > t^{\frac{2+\delta}{2\beta}}\big]\,dt \leqslant M_0 + M^2\int_{M_0}^\infty t^{-\frac{2+\delta}{2}}\,dt = M_0 + 2M^2 M_0^{-\delta/2}/\delta. \tag{2.20}$$

Note that $M_0$ has order $d^{1/2}$ and $M^2$ has order $d^{1+\beta/2}$, so this upper bound has order $d^{1+\beta/2}$.

3 Properties of the estimator when g is not 0

In this section we study the Monte Carlo estimator of the stochastic representation of Section 1 when the forcing term $g$ is not 0. Here we assume that $g$ satisfies the condition $|g(x) - g(y)| \leqslant L\,|x - y|_\gamma$, where $|x|_\gamma = \sum_{i=1}^d |x^{(i)}|^\gamma$, $x^{(i)}$ is the $i$-th coordinate of $x$, and $0 < \gamma < \beta/2$.

Our main results in this section are as follows.

Theorem 3.1

Assume $|\phi(x)| = O\big(|x|^{\beta/(2+\delta)}\big)$ for $|x| \to \infty$, where $\delta > 0$.

  1. $\mathbb{E}\big[\big(u^h_N(t,x) - u(t,x)\big)^2\big] \to 0$ as $N \to \infty$, $h \to 0$.

  2. (CLT with a bias correction) Let $h_N = N^{-2\beta/\gamma}$ and $u(t,x) = \mathbb{E}[Z(t,x)]$, where

    $$Z(t,x) = \phi\big(X^x_{T_t}\big) + \int_0^{T_t} g\big(X^x_s\big)\,ds,$$

    and let $W$ be a standard normal random variable; then for every bounded uniformly continuous function $\psi$,

    $$\mathbb{E}\Big[\psi\Big(\sqrt N\,\big(u^{h_N}_N(t,x) - u(t,x)\big)\big/\sqrt{\mathrm{Var}\,Z(t,x)}\Big)\Big] \to \mathbb{E}[\psi(W)]\quad\text{as } N \to \infty.$$

Let

$$Y^h(t,x) = \phi\big(X^x_{T_t}\big) + \sum_{i=1}^{\lfloor T_t/h\rfloor} h\,g\big(X^x_{t_i}\big)$$

be the approximation of $Z(t,x)$, and let

$$u^h_N(t,x) = \frac1N\sum_{k=1}^N Y^h_k(t,x),$$

where $Y^h_k(t,x) = \phi\big(X^{x,k}_{T^k_t}\big) + \sum_{i=1}^{\lfloor T^k_t/h\rfloor} h\,g\big(X^{x,k}_{t^k_i}\big)$, $k = 1,\dots,N$, are iid copies of $Y^h(t,x)$. Note that for a random variable $U$, its approximation $V$, and iid copies $V_k$, $k = 1,\dots,N$, of $V$, the $L^2$ error satisfies

$$\mathbb{E}\Big[\Big(\mathbb{E}[U] - \frac1N\sum_{k=1}^N V_k\Big)^2\Big] = \frac1N\,\mathrm{var}\,V + \big(\mathbb{E}[U] - \mathbb{E}[V]\big)^2. \tag{3.1}$$
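For completeness, (3.1) is the usual bias-variance decomposition: writing $\bar V_N = \frac1N\sum_{k=1}^N V_k$ and using $\mathbb{E}[\bar V_N] = \mathbb{E}[V]$, the cross term in the expansion vanishes and

$$\mathbb{E}\big[\big(\mathbb{E}[U] - \bar V_N\big)^2\big] = \mathbb{E}\big[\big(\bar V_N - \mathbb{E}[V]\big)^2\big] + \big(\mathbb{E}[U] - \mathbb{E}[V]\big)^2 = \frac{\mathrm{var}\,V}{N} + \big(\mathbb{E}[U] - \mathbb{E}[V]\big)^2.$$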

Therefore, to estimate the $L^2$ error $\mathbb{E}\big[\big(u(t,x) - u^h_N(t,x)\big)^2\big]$, we only need to study $\mathrm{var}\,Y^h(t,x)$ and $\mathbb{E}[Z(t,x)] - \mathbb{E}[Y^h(t,x)]$; the following propositions answer these questions.

Proposition 3.1

There exists a constant $M^1_{t,x}$ (depending on $t$, $x$) such that $\mathrm{Var}\,Y^h(t,x) \leqslant M^1_{t,x}$.

Proposition 3.2

There exists a constant $M^2_{t,x}$ (depending on $t$, $x$) such that $\mathbb{E}\big[|Z(t,x) - Y^h(t,x)|\big] \leqslant M^2_{t,x}\,h^{\gamma/\beta}$.

Proposition 3.3

There exists a constant $M^3_{t,x}$ (depending on $t$, $x$) such that $\mathbb{E}\big[|Z(t,x) - Y^h(t,x)|^2\big] \leqslant M^3_{t,x}\,h^{2\gamma/\beta}$.

Sections 3.1 and 3.2 give the proofs of these propositions. Section 3.3 contains the proof of our CLT.

Remark 3.1

  • Non-asymptotic confidence interval: combining (3.1), Proposition 3.1 and Proposition 3.2, we have

    $$\mathbb{E}\big[\big(u(t,x) - u^h_N(t,x)\big)^2\big] \leqslant \frac1N M^1_{t,x} + \big(M^2_{t,x}\big)^2 h^{2\gamma/\beta}, \tag{3.2}$$

    where $u(t,x)$ is the solution of problem (1.1). Now we can construct a confidence interval using the Markov inequality:

    $$\mathbb{P}\big[|u(t,x) - u^h_N(t,x)| > r\big] \leqslant \mathbb{E}\big[\big(u(t,x) - u^h_N(t,x)\big)^2\big]\big/r^2 \leqslant \frac{1}{r^2}\Big(\frac1N M^1_{t,x} + \big(M^2_{t,x}\big)^2 h^{2\gamma/\beta}\Big). \tag{3.3}$$

    Hence we can pick suitable $N$ and $h$ such that $\mathbb{P}\big[|u(t,x) - u^h_N(t,x)| > r\big] < \epsilon$ for any prescribed small $\epsilon$.

  • Asymptotic confidence interval: we can use the CLT in Theorem 3.1 to get an asymptotic confidence interval. In other words, the central limit theorem can be written using convergence in distribution:

    $$\sqrt N\,\big(u^{h_N}_N(t,x) - u(t,x)\big) \xrightarrow{d} N\big(0, \mathrm{Var}\,Z(t,x)\big)\quad\text{as } N \to \infty. \tag{3.4}$$

    Once we have an upper bound $M(t,x)$ for $\mathrm{Var}\,Z(t,x)$ (see e.g. Example 2.1), it yields a $100(1-\alpha)\%$ asymptotic confidence interval $u^{h_N}_N \pm \sqrt{\frac{M(t,x)}{N}}\,z_{\alpha/2}$ for $u(t,x)$, where $z_t$ satisfies $\Phi(z_t) = 1 - t$ and $\Phi$ is the distribution function of the standard normal distribution. See Section 4.3 for a simple example, and the code sketch right after this remark.
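A minimal sketch of this interval construction (our own helper, not from the paper; it takes the Monte Carlo draws and an externally supplied variance bound):

```python
from scipy.stats import norm

def asymptotic_ci(samples, var_bound, level=0.95):
    """Asymptotic confidence interval of Remark 3.1: estimate +/- sqrt(M/N) * z_{alpha/2}.

    samples   -- iid realisations of Y^h(t, x)
    var_bound -- an upper bound M(t, x) on Var Z(t, x)
    """
    n = len(samples)
    estimate = float(np.mean(samples))
    z = norm.ppf(1.0 - (1.0 - level) / 2.0)   # z_{alpha/2}
    half_width = z * np.sqrt(var_bound / n)
    return estimate, estimate - half_width, estimate + half_width
```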

Before the calculations, we need the following results (see [6], page 162):

  1. For constants $c > 0$, $\eta \in (-1, \beta)$ and a symmetric $\beta$-stable 1-dimensional process $U_t$ with $\mathbb{E}\big[e^{izU_t}\big] = e^{-tc|z|^\beta}$, we have

    $$\mathbb{E}\big[|U_t|^\eta\big] = (tc)^{\eta/\beta}\,\frac{2^\eta\,\Gamma\big(\frac{1+\eta}{2}\big)\,\Gamma\big(1 - \frac\eta\beta\big)}{\sqrt\pi\,\Gamma\big(1 - \frac\eta2\big)}. \tag{3.5}$$

    Recall that each component of $X_1 - X_0$, denoted by $X^{(j)}$, is symmetric with $\mathbb{E}\big[e^{izX^{(j)}}\big] = e^{-c|z|^\beta}$ and $c > 0$.

  2. If $0 < \alpha < 1$ and $\{X_t\}$ is a stable subordinator with $\mathbb{E}\big[e^{-uX_t}\big] = e^{-tc'u^\alpha}$, where $c'$ is some constant, then for $-\infty < \kappa < \alpha$,

    $$\mathbb{E}\big[X_t^\kappa\big] = (tc')^{\kappa/\alpha}\,\frac{\Gamma\big(1 - \frac\kappa\alpha\big)}{\Gamma(1-\kappa)}. \tag{3.6}$$

    Since $\eta \sim S_\alpha(1,1,0)$, we have $\mathbb{E}\big[e^{-u\eta}\big] = \exp\big\{-\frac{1}{\cos\frac{\pi\alpha}{2}}u^\alpha\big\}$ (see [15], Proposition 1.2.12). Hence, for $-\infty < \kappa < \alpha$ (we write $\kappa$ for the exponent to avoid a clash with the random variable $\eta$),

    $$\mathbb{E}\big[\eta^\kappa\big] = \Big(\frac{1}{\cos\frac{\pi\alpha}{2}}\Big)^{\kappa/\alpha}\,\frac{\Gamma\big(1 - \frac\kappa\alpha\big)}{\Gamma(1-\kappa)}. \tag{3.7}$$

    A quick Monte Carlo check of (3.7) is sketched after this list.
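For instance, the following snippet (ours) compares the empirical moments of $\eta$ with the closed form (3.7), reusing sample_eta from Section 1:

```python
from math import gamma, cos, pi

# Monte Carlo check of (3.7) for a few exponents kappa < alpha.
rng = np.random.default_rng(0)
alpha, n = 0.5, 10**6
eta = sample_eta(alpha, n, rng)
for kappa in (-1.0, -0.5, 0.25):
    mc = np.mean(eta ** kappa)
    exact = (1.0 / cos(pi * alpha / 2)) ** (kappa / alpha) * gamma(1 - kappa / alpha) / gamma(1 - kappa)
    print(kappa, mc, exact)   # the two columns should agree to Monte Carlo accuracy
```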

3.1 Estimation of variance of the approximation

In this section we estimate $\mathrm{Var}\,Y^h(t,x)$.

Denote $n := \lfloor T_t/h\rfloor$. Note that the variance does not change when a constant is added, and denote $g_i := g(X_{t_i}) - g(X_0)$. We have

$$\begin{aligned}\mathrm{Var}\,Y^h(t,x) &= \mathrm{Var}\Big(\phi\big(X^x_{T_t}\big) + h\sum_{i=1}^n g\big(X^x_{t_i}\big)\Big) = \mathrm{Var}\Big(\phi\big(X^x_{T_t}\big) + h\sum_{i=1}^n\big(g\big(X^x_{t_i}\big) - g(X_0)\big)\Big)\\ &\leqslant \mathbb{E}\Big[\Big(\phi\big(X^x_{T_t}\big) + h\sum_{i=1}^n g_i\Big)^2\Big] = \mathbb{E}\big[\phi\big(X^x_{T_t}\big)^2\big] + h^2\,\mathbb{E}\Big[\Big(\sum_{i=1}^n g_i\Big)^2\Big] + 2h\,\mathbb{E}\Big[\phi\big(X^x_{T_t}\big)\sum_{i=1}^n g_i\Big].\end{aligned} \tag{3.8}$$

Denote the upper bound of 𝔼[ϕ(XTt)2] by M1. Next,

$$\begin{aligned}\mathbb{E}\big[g_i^2\,\big|\,T_t\big] &\leqslant L^2\,\mathbb{E}\big[|X_{t_i} - X_0|_\gamma^2\,\big|\,T_t\big] = L^2\,\mathbb{E}\Big[\Big(\sum_{j=1}^d\big|X^{(j)}_{t_i} - X^{(j)}_0\big|^\gamma\Big)^2\,\Big|\,T_t\Big]\\ &\leqslant dL^2\,\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_{t_i} - X^{(j)}_0\big|^{2\gamma}\,\Big|\,T_t\Big] = dL^2\,t_i^{2\gamma/\beta}\,\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_1 - X^{(j)}_0\big|^{2\gamma}\Big].\end{aligned} \tag{3.9}$$

Using (3.5), for $j = 1,\dots,d$ we have

$$\mathbb{E}\big[\big|X^{(j)}_1 - X^{(j)}_0\big|^{2\gamma}\big] = c^{2\gamma/\beta}\,\frac{2^{2\gamma}\,\Gamma\big(\frac{1+2\gamma}{2}\big)\,\Gamma\big(1 - \frac{2\gamma}\beta\big)}{\sqrt\pi\,\Gamma(1-\gamma)}. \tag{3.10}$$

Let us denote

$$M_2 := \sum_{j=1}^d\mathbb{E}\big[\big|X^{(j)}_1 - X^{(j)}_0\big|^{2\gamma}\big] = d\,c^{2\gamma/\beta}\,\frac{2^{2\gamma}\,\Gamma\big(\frac{1+2\gamma}{2}\big)\,\Gamma\big(1 - \frac{2\gamma}\beta\big)}{\sqrt\pi\,\Gamma(1-\gamma)}. \tag{3.11}$$

Then (3.9) becomes

$$\mathbb{E}\big[g_i^2\,\big|\,T_t\big] \leqslant dL^2\,t_i^{2\gamma/\beta}\,M_2, \tag{3.12}$$

and for $i \neq j$,

$$\mathbb{E}\big[g_ig_j\,\big|\,T_t\big] \leqslant \big(\mathbb{E}\big[g_i^2\,\big|\,T_t\big]\,\mathbb{E}\big[g_j^2\,\big|\,T_t\big]\big)^{\frac12} \leqslant dL^2\,t_i^{\gamma/\beta}t_j^{\gamma/\beta}\,M_2. \tag{3.13}$$

Therefore,

$$\mathbb{E}\Big[\Big(\sum_{i=1}^n g_i\Big)^2\,\Big|\,T_t\Big] \leqslant \sum_{i=1}^n dL^2\,t_i^{2\gamma/\beta}M_2 + \sum_{i\neq j} dL^2\,t_i^{\gamma/\beta}t_j^{\gamma/\beta}M_2 = dL^2 M_2\Big(\sum_{i=1}^n t_i^{\gamma/\beta}\Big)^2. \tag{3.14}$$

Recall that $t_i = ih$ and $n = \lfloor T_t/h\rfloor$; hence

$$\sum_{i=1}^n t_i^{\gamma/\beta} = h^{\gamma/\beta}\sum_{i=1}^n i^{\gamma/\beta} \leqslant h^{\gamma/\beta}\int_0^{n+1} x^{\gamma/\beta}\,dx = \frac{1}{1+\gamma/\beta}\,h^{\gamma/\beta}\,(n+1)^{1+\gamma/\beta} \leqslant \frac{1}{1+\gamma/\beta}\,h^{\gamma/\beta}\,\big(T_t/h + 1\big)^{1+\gamma/\beta}. \tag{3.15}$$

Hence, for $h \leqslant 1$,

$$h^2\,\mathbb{E}\Big[\Big(\sum_{i=1}^n g_i\Big)^2\Big] = h^2\,\mathbb{E}\Big[\mathbb{E}\Big[\Big(\sum_{i=1}^n g_i\Big)^2\,\Big|\,T_t\Big]\Big] \leqslant \frac{dL^2M_2}{(1+\gamma/\beta)^2}\,\mathbb{E}\big[(T_t + 1)^{2(1+\gamma/\beta)}\big]. \tag{3.16}$$

Note that

$$\mathbb{E}\big[(T_t+1)^{2(1+\gamma/\beta)}\big] \leqslant \mathbb{E}\big[(T_t+1)^3\big] = \mathbb{E}[T_t^3] + 3\,\mathbb{E}[T_t^2] + 3\,\mathbb{E}[T_t] + 1. \tag{3.17}$$

And by (3.7), we know that for $k = 1, 2, 3$,

$$\mathbb{E}\big[T_t^k\big] = \bar a^{k\alpha}\,\mathbb{E}\big[\eta^{-k\alpha}\big] = \bar a^{k\alpha}\,\Big(\cos\frac{\pi\alpha}{2}\Big)^k\,\frac{\Gamma(1+k)}{\Gamma(1+k\alpha)}, \tag{3.18}$$

implying an upper bound for $h^2\,\mathbb{E}\big[\big(\sum_{i=1}^n g_i\big)^2\big]$. By the Cauchy-Schwarz inequality,

$$h\,\mathbb{E}\Big[\phi\big(X_{T_t}\big)\sum_{i=1}^n g_i\Big] \leqslant \Big(h^2\,\mathbb{E}\Big[\Big(\sum_{i=1}^n g_i\Big)^2\Big]\,\mathbb{E}\big[\phi\big(X_{T_t}\big)^2\big]\Big)^{\frac12}, \tag{3.19}$$

and hence we get an upper bound for $\mathrm{Var}\,Y^h(t,x)$ from (3.8):

$$\mathrm{Var}\,Y^h(t,x) \leqslant \mathbb{E}\big[\phi\big(X_{T_t}\big)^2\big] + \frac{dL^2M_2}{(1+\gamma/\beta)^2}\,\mathbb{E}\big[(T_t+1)^{2(1+\gamma/\beta)}\big] + 2\Big(\mathbb{E}\big[\phi\big(X_{T_t}\big)^2\big]\,\frac{dL^2M_2}{(1+\gamma/\beta)^2}\,\mathbb{E}\big[(T_t+1)^{2(1+\gamma/\beta)}\big]\Big)^{\frac12}. \tag{3.20}$$

Remark 3.2

By Example 2.1, the upper bound of $\mathbb{E}\big[\phi\big(X_{T_t}\big)^2\big]$ has order $d^{1+\beta/2}$. By (3.11), $M_2$ has order $d$. By (3.16), the upper bound of $h^2\,\mathbb{E}\big[\big(\sum_{i=1}^n g_i\big)^2\big]$ has order $d^2$. Hence the upper bound of $\mathrm{Var}\,Y^h$ has order $d^2$.

3.2 Estimation of $\mathbb{E}Z - \mathbb{E}Y^h$

Similarly, we begin with the estimation of the conditional expectation. Conditioning on $T_t$, we write

$$\begin{aligned}\mathbb{E}\big[Z(t,x)\,\big|\,T_t\big] - \mathbb{E}\big[Y^h(t,x)\,\big|\,T_t\big] &= \mathbb{E}\Big[\phi\big(X_{T_t}\big) + \int_0^{T_t}g(X_s)\,ds - \phi\big(X_{T_t}\big) - \sum_{i=1}^{\lfloor T_t/h\rfloor}h\,g(X_{t_i})\,\Big|\,T_t\Big]\\ &= \mathbb{E}\Big[\int_0^{T_t}g(X_s)\,ds - \sum_{i=1}^{\lfloor T_t/h\rfloor}h\,g(X_{t_i})\,\Big|\,T_t\Big]\\ &= \mathbb{E}\Big[\sum_{i=1}^{\lfloor T_t/h\rfloor}\int_{t_i}^{t_i+h}\big(g(X_s) - g(X_{t_i})\big)\,ds + \int_{\lfloor T_t/h\rfloor h}^{T_t}g(X_s)\,ds\,\Big|\,T_t\Big]. \tag{3.21}\end{aligned}$$

We have

$$\begin{aligned}\mathbb{E}\Big[\Big|\sum_{i=1}^{\lfloor T_t/h\rfloor}\int_{t_i}^{t_i+h}\big(g(X_s) - g(X_{t_i})\big)\,ds\Big|\,\Big|\,T_t\Big] &\leqslant \mathbb{E}\Big[\sum_{i=1}^{\lfloor T_t/h\rfloor}\int_{t_i}^{t_i+h}L\,|X_s - X_{t_i}|_\gamma\,ds\,\Big|\,T_t\Big]\\ \text{(stationarity of increments)}\quad &= \mathbb{E}\Big[\sum_{i=1}^{\lfloor T_t/h\rfloor}\int_0^h L\,|X_s - X_0|_\gamma\,ds\,\Big|\,T_t\Big] = \mathbb{E}\Big[\sum_{i=1}^{\lfloor T_t/h\rfloor}\int_0^h L\sum_{j=1}^d\big|X^{(j)}_s - X^{(j)}_0\big|^\gamma\,ds\,\Big|\,T_t\Big]\\ (X_t\text{ is }\beta\text{-stable})\quad &= \mathbb{E}\Big[\lfloor T_t/h\rfloor\,L\int_0^h s^{\gamma/\beta}\,ds\,\Big|\,T_t\Big]\sum_{j=1}^d\mathbb{E}\big[\big|X^{(j)}_1 - X^{(j)}_0\big|^\gamma\big] = C_0\lfloor T_t/h\rfloor\,h^{1+\gamma/\beta} \leqslant C_0\,T_t\,h^{\gamma/\beta},\end{aligned} \tag{3.22}$$

where $C_0 = \frac{1}{1+\gamma/\beta}\,L\sum_{j=1}^d\mathbb{E}\big[\big|X^{(j)}_1 - X^{(j)}_0\big|^\gamma\big]$.

From (3.5), we know that

$$\mathbb{E}\big[\big|X^{(j)}_1 - X^{(j)}_0\big|^\gamma\big] = c^{\gamma/\beta}\,\frac{2^\gamma\,\Gamma\big(\frac{1+\gamma}{2}\big)\,\Gamma\big(1 - \frac\gamma\beta\big)}{\sqrt\pi\,\Gamma\big(1 - \frac\gamma2\big)}. \tag{3.23}$$

Next, it is clear that

$$|g(X_s)| \leqslant |g(X_{T_t})| + |g(X_{T_t}) - g(X_s)|, \tag{3.24}$$

thus

$$\mathbb{E}\big[|g(X_s)|\big] \leqslant \mathbb{E}\big[|g(X_{T_t})| + |g(X_{T_t}) - g(X_s)|\big] \leqslant \mathbb{E}\big[|g(X_{T_t})| + L\,|X_{T_t} - X_s|_\gamma\big] = \mathbb{E}\big[|g(X_{T_t})|\big] + L\,(T_t - s)^{\gamma/\beta}\,\mathbb{E}\big[|X_1 - X_0|_\gamma\big], \tag{3.25}$$

and

$$\mathbb{E}\Big[\Big|\int_{\lfloor T_t/h\rfloor h}^{T_t}g(X_s)\,ds\Big|\,\Big|\,T_t\Big] \leqslant \mathbb{E}\Big[\int_{\lfloor T_t/h\rfloor h}^{T_t}\Big(\mathbb{E}\big[|g(X_{T_t})|\big] + L\,(T_t - s)^{\gamma/\beta}\,\mathbb{E}\big[|X_1 - X_0|_\gamma\big]\Big)\,ds\,\Big|\,T_t\Big] \leqslant h\,\mathbb{E}\big[|g(X_{T_t})|\big] + \frac{1}{1+\gamma/\beta}\,L\,h^{1+\gamma/\beta}\,\mathbb{E}\big[|X_1 - X_0|_\gamma\big]. \tag{3.26}$$

One can show, as in the proof of Theorem 2.1(ii), that if $g(x) = O\big(|x|^{\beta/(1+\delta)}\big)$, where $\delta > 0$, then

$$\mathbb{E}\big[|g(X_{T_t})|\big] \leqslant M_3 < \infty$$

for some constant $M_3$.

Therefore, by (3.26),

$$\mathbb{E}\Big[\mathbb{E}\Big[\Big|\int_{\lfloor T_t/h\rfloor h}^{T_t}g(X_s)\,ds\Big|\,\Big|\,T_t\Big]\Big] \leqslant M_3\,h + \frac{1}{1+\gamma/\beta}\,L\,\mathbb{E}\big[|X_1 - X_0|_\gamma\big]\,h^{1+\gamma/\beta}. \tag{3.27}$$

Combining (3.21), (3.22) and (3.27), we have

$$\big|\mathbb{E}[Z - Y^h]\big| = \big|\mathbb{E}\big[\mathbb{E}[Z - Y^h\,|\,T_t]\big]\big| \leqslant M_3\,h + C_0\,h^{1+\gamma/\beta} + C_0\,\mathbb{E}[T_t]\,h^{\gamma/\beta} = O\big(h^{\gamma/\beta}\big), \tag{3.28}$$

with $C_0 = \frac{Ld\,c^{\gamma/\beta}\,2^\gamma\,\Gamma\big(\frac{1+\gamma}{2}\big)\Gamma\big(1-\frac\gamma\beta\big)}{(1+\gamma/\beta)\sqrt\pi\,\Gamma\big(1-\frac\gamma2\big)}$ as in (3.22)-(3.23) and $\mathbb{E}[T_t] = \bar a^\alpha\cos\big(\frac{\pi\alpha}{2}\big)\frac{\Gamma(2)}{\Gamma(1+\alpha)}$ by (3.18). This proves Proposition 3.2.

Remark 3.3

  1. With a similar argument, we can see that $\mathbb{E}[Z^2] < \infty$. Since $\mathbb{E}\big[\phi\big(X^x_{T_t}\big)^2\big] < \infty$, we only need $\mathbb{E}\big[\big(\int_0^{T_t}g(X_s)\,ds\big)^2\big] < \infty$.

    As in (3.21), we have

    $$\begin{aligned}\mathbb{E}\Big[\Big(\int_0^{T_t}g(X_s)\,ds\Big)^2\Big] &\leqslant \mathbb{E}\Big[\Big(\int_0^{T_t}|g(X_s)|\,ds\Big)^2\Big] \leqslant \mathbb{E}\Big[\Big(\int_0^{T_t}\big(|g(X_0)| + |g(X_0) - g(X_s)|\big)\,ds\Big)^2\Big]\\ &= \mathbb{E}\Big[\mathbb{E}\Big[\Big(\int_0^{T_t}\big(|g(X_0)| + |g(X_0) - g(X_s)|\big)\,ds\Big)^2\,\Big|\,T_t\Big]\Big]\\ &\leqslant 2\,\mathbb{E}\Big[\mathbb{E}\Big[\Big(\int_0^{T_t}|g(X_0)|\,ds\Big)^2\,\Big|\,T_t\Big] + \mathbb{E}\Big[\Big(\int_0^{T_t}|g(X_0) - g(X_s)|\,ds\Big)^2\,\Big|\,T_t\Big]\Big]. \tag{3.29}\end{aligned}$$

    Note that

    $$\mathbb{E}\Big[\mathbb{E}\Big[\Big(\int_0^{T_t}|g(X_0)|\,ds\Big)^2\,\Big|\,T_t\Big]\Big] = \mathbb{E}[T_t^2]\,|g(X_0)|^2 < \infty, \tag{3.30}$$

    and, using $\big(\int_0^{T}f\,ds\big)^2 \leqslant T\int_0^{T}f^2\,ds$,

    $$\begin{aligned}\mathbb{E}\Big[\Big(\int_0^{T_t}|g(X_0) - g(X_s)|\,ds\Big)^2\,\Big|\,T_t\Big] &\leqslant L^2\,\mathbb{E}\Big[\Big(\int_0^{T_t}|X_0 - X_s|_\gamma\,ds\Big)^2\,\Big|\,T_t\Big] \leqslant L^2\,T_t\,\mathbb{E}\Big[\int_0^{T_t}|X_0 - X_s|_\gamma^2\,ds\,\Big|\,T_t\Big]\\ &\leqslant L^2 d\,T_t\,\mathbb{E}\Big[\int_0^{T_t}\sum_{j=1}^d\big|X^{(j)}_0 - X^{(j)}_s\big|^{2\gamma}\,ds\,\Big|\,T_t\Big]\\ &= L^2 d\,T_t\int_0^{T_t}s^{2\gamma/\beta}\,ds\;\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_0 - X^{(j)}_1\big|^{2\gamma}\Big] = \frac{L^2 d}{1+2\gamma/\beta}\,T_t^{2+2\gamma/\beta}\,\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_0 - X^{(j)}_1\big|^{2\gamma}\Big].\end{aligned}$$

    Hence

    $$\mathbb{E}\Big[\mathbb{E}\Big[\Big(\int_0^{T_t}|g(X_0) - g(X_s)|\,ds\Big)^2\,\Big|\,T_t\Big]\Big] \leqslant \frac{L^2 d}{1+2\gamma/\beta}\,\mathbb{E}\big[T_t^{2+2\gamma/\beta}\big]\,\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_0 - X^{(j)}_1\big|^{2\gamma}\Big] < \infty. \tag{3.31}$$

    Combining (3.30) and (3.31), we have

    $$\mathbb{E}\Big[\Big(\int_0^{T_t}g(X_s)\,ds\Big)^2\Big] < \infty, \tag{3.32}$$

    and therefore $\mathbb{E}[Z^2] < \infty$.

  2. Under the same conditions, a similar argument shows that $\mathbb{E}\big[|Y^h - Z|^2\big]$ has order $h^{2\gamma/\beta}$:

    $$\mathbb{E}\big[|Y^h - Z|^2\big] = \mathbb{E}\Big[\Big(\sum_{i=1}^{\lfloor T_t/h\rfloor}\int_{(i-1)h}^{ih}\big(g(X_s) - g(X_{(i-1)h})\big)\,ds + \int_{\lfloor T_t/h\rfloor h}^{T_t}g(X_s)\,ds\Big)^2\Big]. \tag{3.33}$$

    And,

    $$\begin{aligned}\mathbb{E}\Big[\Big|\int_{\lfloor T_t/h\rfloor h}^{T_t}g(X_s)\,ds\Big|^2\,\Big|\,T_t\Big] &\leqslant \mathbb{E}\Big[\big(T_t - \lfloor T_t/h\rfloor h\big)\int_{\lfloor T_t/h\rfloor h}^{T_t}g(X_s)^2\,ds\,\Big|\,T_t\Big] \leqslant \mathbb{E}\Big[h\int_{\lfloor T_t/h\rfloor h}^{T_t}\big(|g(X_{T_t})| + |g(X_{T_t}) - g(X_s)|\big)^2\,ds\,\Big|\,T_t\Big]\\ &\leqslant 2h\,\mathbb{E}\Big[\int_{\lfloor T_t/h\rfloor h}^{T_t}g(X_{T_t})^2\,ds\,\Big|\,T_t\Big] + 2h\,\mathbb{E}\Big[\int_{\lfloor T_t/h\rfloor h}^{T_t}\big(g(X_{T_t}) - g(X_s)\big)^2\,ds\,\Big|\,T_t\Big],\end{aligned} \tag{3.34}$$

    $$\begin{aligned}\mathbb{E}\Big[\int_{\lfloor T_t/h\rfloor h}^{T_t}\big(g(X_{T_t}) - g(X_s)\big)^2\,ds\,\Big|\,T_t\Big] &\leqslant L^2\,\mathbb{E}\Big[\int_{\lfloor T_t/h\rfloor h}^{T_t}|X_{T_t} - X_s|_\gamma^2\,ds\,\Big|\,T_t\Big] \leqslant L^2 d\,\mathbb{E}\Big[\int_{\lfloor T_t/h\rfloor h}^{T_t}\sum_{j=1}^d\big|X^{(j)}_{T_t} - X^{(j)}_s\big|^{2\gamma}\,ds\,\Big|\,T_t\Big]\\ &= L^2 d\int_{\lfloor T_t/h\rfloor h}^{T_t}(T_t - s)^{2\gamma/\beta}\,ds\;\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_1 - X^{(j)}_0\big|^{2\gamma}\Big] \leqslant \frac{L^2 d}{1+2\gamma/\beta}\,\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_1 - X^{(j)}_0\big|^{2\gamma}\Big]\,h^{1+2\gamma/\beta},\end{aligned} \tag{3.35}$$

    $$\begin{aligned}\mathbb{E}\Big[\Big(\sum_{i=1}^{\lfloor T_t/h\rfloor}\int_{(i-1)h}^{ih}\big(g(X_s) - g(X_{(i-1)h})\big)\,ds\Big)^2\,\Big|\,T_t\Big] &\leqslant \lfloor T_t/h\rfloor\sum_{i=1}^{\lfloor T_t/h\rfloor}\mathbb{E}\Big[\Big(\int_{(i-1)h}^{ih}\big(g(X_s) - g(X_{(i-1)h})\big)\,ds\Big)^2\,\Big|\,T_t\Big]\\ &\leqslant \lfloor T_t/h\rfloor\,L^2\sum_{i=1}^{\lfloor T_t/h\rfloor}\mathbb{E}\Big[h\int_{(i-1)h}^{ih}|X_s - X_{(i-1)h}|_\gamma^2\,ds\,\Big|\,T_t\Big]\\ &\leqslant \lfloor T_t/h\rfloor\,L^2 d\sum_{i=1}^{\lfloor T_t/h\rfloor}\mathbb{E}\Big[h\int_{(i-1)h}^{ih}\sum_{j=1}^d\big|X^{(j)}_s - X^{(j)}_{(i-1)h}\big|^{2\gamma}\,ds\,\Big|\,T_t\Big]\\ &= \lfloor T_t/h\rfloor^2 L^2 d\,h\int_0^h s^{2\gamma/\beta}\,ds\;\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_1 - X^{(j)}_0\big|^{2\gamma}\Big] \leqslant T_t^2\,L^2 d\,\mathbb{E}\Big[\sum_{j=1}^d\big|X^{(j)}_1 - X^{(j)}_0\big|^{2\gamma}\Big]\,h^{2\gamma/\beta}.\end{aligned}$$

    Hence $\mathbb{E}\big[|Y^h - Z|^2\big] \leqslant M^3_{t,x}\,h^{2\gamma/\beta}$, where $M^3_{t,x}$ is a constant that only depends on $t$ and $x$.

3.3 Proof of Central Limit Theorem with a bias correction

Recall the definition of a null array: a triangular array of random variables $(\xi_{nj})$, $1 \leqslant j \leqslant m_n$, $n, m_n \in \mathbb{N}$, such that the $\xi_{nj}$ are independent for each $n$ and satisfy

$$\sup_j\,\mathbb{E}\big[|\xi_{nj}| \wedge 1\big] \to 0\quad\text{as } n \to \infty. \tag{3.36}$$

The following result is well-known (see [5], Theorem 5.15).

Theorem 3.2

Let $(\xi_{nj})$ be a null array of random variables; then $\sum_{j=1}^{m_n}\xi_{nj} \xrightarrow{d} N(b, c)$ iff these conditions hold:

  1. $\sum_{j=1}^{m_n}\mathbb{P}[|\xi_{nj}| > \epsilon] \to 0$ for all $\epsilon > 0$ as $n \to \infty$;

  2. $\sum_{j=1}^{m_n}\mathbb{E}[\xi_{nj};\,|\xi_{nj}| \leqslant 1] \to b$ as $n \to \infty$, where $\mathbb{E}[X; A] := \mathbb{E}[X\mathbb{I}_A]$;

  3. $\sum_{j=1}^{m_n}\mathrm{Var}[\xi_{nj};\,|\xi_{nj}| \leqslant 1] \to c$ as $n \to \infty$, where $\mathrm{Var}(X; A) := \mathrm{Var}(X\mathbb{I}_A)$.

Again denote

$$Z := \phi\big(X^x_{T_t}\big) + \int_0^{T_t}g\big(X^x_s\big)\,ds,\qquad Y^h_k := \phi\big(X^{x,k}_{T^k_t}\big) + \sum_{i=1}^{\lfloor T^k_t/h\rfloor}h\,g\big(X^{x,k}_{ih}\big),$$

where Ttk are independent samples of the stopping time. Now we will apply Theorem 3.2 to prove Theorem 3.1(ii).

Proof

Let $\xi_{Nj} = \frac{1}{\sqrt N}\big(Y^{h_N}_j - \mathbb{E}Z\big)$, $j = 1,\dots,N$; then for any $\epsilon > 0$,

$$\mathbb{P}[|\xi_{Nj}| > \epsilon] = \mathbb{P}\big[|Y^{h_N}_j - \mathbb{E}Z| > \sqrt N\epsilon\big] \leqslant \frac{\mathbb{E}\big[|Y^{h_N}_j - \mathbb{E}Z|\big]}{\sqrt N\epsilon} \leqslant \frac{\mathbb{E}\big[|Y^{h_N}_j - \mathbb{E}[Y^{h_N}_j]|\big] + \mathbb{E}\big[|Y^{h_N}_j - Z|\big]}{\sqrt N\epsilon} \leqslant \frac{\big(\mathrm{var}\,Y^{h_N}_j\big)^{\frac12} + \mathbb{E}\big[|Y^{h_N}_j - Z|\big]}{\sqrt N\epsilon} \leqslant \frac{\big(M^1_{t,x}\big)^{\frac12} + M^2_{t,x}\,h_N^{\gamma/\beta}}{\sqrt N\epsilon} \xrightarrow{N\to\infty} 0, \tag{3.37}$$

where $M^1_{t,x}$, $M^2_{t,x}$ are the same as above. This implies that $\xi_{Nj}$ converges to 0 in probability uniformly in $j$, and therefore $(\xi_{Nj})$ is a null array.

Denote $A_{Nj} = \big\{|Y^{h_N}_j - \mathbb{E}Z| \leqslant \sqrt N\big\} = \{|\xi_{Nj}| \leqslant 1\}$. To apply Theorem 3.2, we only need to check that the three conditions hold:

  1. We need to prove that for any $\epsilon > 0$,

    $$\sum_{j=1}^N\mathbb{P}[|\xi_{Nj}| > \epsilon] = \sum_{j=1}^N\mathbb{P}\big[|Y^{h_N}_j - \mathbb{E}Z| > \sqrt N\epsilon\big] \to 0. \tag{3.38}$$

    Note that

    $$\big\{|Y^{h_N}_j - \mathbb{E}Z| > \sqrt N\epsilon\big\} \subseteq \big\{|Y^{h_N}_j - Z| > \tfrac12\sqrt N\epsilon\big\} \cup \big\{|Z - \mathbb{E}Z| > \tfrac12\sqrt N\epsilon\big\}.$$

    Hence

    $$\mathbb{P}\big[|Y^{h_N}_j - \mathbb{E}Z| > \sqrt N\epsilon\big] \leqslant \mathbb{P}\big[|Y^{h_N}_j - Z| > \tfrac12\sqrt N\epsilon\big] + \mathbb{P}\big[|Z - \mathbb{E}Z| > \tfrac12\sqrt N\epsilon\big], \tag{3.39}$$

    $$\sum_{j=1}^N\mathbb{P}\big[|Y^{h_N}_j - \mathbb{E}Z| > \sqrt N\epsilon\big] \leqslant \sum_{j=1}^N\mathbb{P}\big[|Y^{h_N}_j - Z| > \tfrac12\sqrt N\epsilon\big] + \sum_{j=1}^N\mathbb{P}\big[|Z - \mathbb{E}Z| > \tfrac12\sqrt N\epsilon\big], \tag{3.40}$$

    $$\sum_{j=1}^N\mathbb{P}\big[|Z - \mathbb{E}Z| > \tfrac12\sqrt N\epsilon\big] = \sum_{j=1}^N\mathbb{E}\big[1;\,|Z - \mathbb{E}Z| > \tfrac12\sqrt N\epsilon\big] \leqslant \sum_{j=1}^N\mathbb{E}\Big[\frac{4|Z - \mathbb{E}Z|^2}{N\epsilon^2};\,|Z - \mathbb{E}Z| > \tfrac12\sqrt N\epsilon\Big] = \mathbb{E}\Big[\frac{4|Z - \mathbb{E}Z|^2}{\epsilon^2};\,|Z - \mathbb{E}Z| > \tfrac12\sqrt N\epsilon\Big] \xrightarrow{N\to\infty} 0, \tag{3.41}$$

    $$\sum_{j=1}^N\mathbb{P}\big[|Y^{h_N}_j - Z| > \tfrac12\sqrt N\epsilon\big] \leqslant \sum_{j=1}^N\frac{\mathbb{E}\big[|Y^{h_N}_j - Z|\big]}{\tfrac12\sqrt N\epsilon} \leqslant 2\sqrt N\,M^2_{t,x}\,h_N^{\gamma/\beta}\big/\epsilon \xrightarrow{N\to\infty} 0. \tag{3.42}$$

    Together, (3.41) and (3.42) show that (3.38) holds.

  2. We need to prove that

    $$\sum_{j=1}^N\mathbb{E}\big[\xi_{Nj};\,|\xi_{Nj}| \leqslant 1\big] = \frac{1}{\sqrt N}\sum_{j=1}^N\mathbb{E}\big[Y^{h_N}_j - \mathbb{E}Z;\,A_{Nj}\big] \to 0\quad\text{as } N \to \infty. \tag{3.43}$$

    Since

    $$\frac{1}{\sqrt N}\sum_{j=1}^N\mathbb{E}\big[Y^{h_N}_j - \mathbb{E}Z\big] = \frac{1}{\sqrt N}\sum_{j=1}^N\mathbb{E}\big[Y^{h_N}_j - Z\big] \leqslant \frac{1}{\sqrt N}\sum_{j=1}^N\mathbb{E}\big[|Y^{h_N}_j - Z|\big] \leqslant \frac{1}{\sqrt N}\sum_{j=1}^N M^2_{t,x}\,h_N^{\gamma/\beta} \to 0\quad\text{as } N \to \infty, \tag{3.44}$$

    we only need to prove

    $$\frac{1}{\sqrt N}\sum_{j=1}^N\mathbb{E}\big[Y^{h_N}_j - \mathbb{E}Z;\,A^C_{Nj}\big] \to 0\quad\text{as } N \to \infty. \tag{3.45}$$

    For a random variable $X$, we denote $X^+ = \max\{X, 0\}$. Then, since $|Y^{h_N}_1 - \mathbb{E}Z| > \sqrt N$ on $A^C_{N1} = \{|Y^{h_N}_1 - \mathbb{E}Z| > \sqrt N\}$,

    $$\frac{1}{\sqrt N}\sum_{j=1}^N\mathbb{E}\big[\big(Y^{h_N}_j - \mathbb{E}Z\big)^+;\,A^C_{Nj}\big] = \sqrt N\,\mathbb{E}\big[\big(Y^{h_N}_1 - \mathbb{E}Z\big)^+;\,A^C_{N1}\big] \leqslant \mathbb{E}\big[\big(\big(Y^{h_N}_1 - \mathbb{E}Z\big)^+\big)^2;\,A^C_{N1}\big] \leqslant \mathbb{E}\big[2\big(Y^{h_N}_1 - Z\big)^2 + 2\big(Z - \mathbb{E}Z\big)^2;\,A^C_{N1}\big]. \tag{3.46}$$

    By Proposition 3.3,

    $$\mathbb{E}\big[\big(Y^{h_N}_1 - Z\big)^2\big] \leqslant M^3_{t,x}\,h_N^{2\gamma/\beta} \to 0\quad\text{as } N \to \infty. \tag{3.47}$$

    Since $\mathbb{P}\big[A^C_{N1}\big] \to 0$ as $N \to \infty$, we have

    $$\mathbb{E}\big[\big(Z - \mathbb{E}Z\big)^2;\,A^C_{N1}\big] \to 0\quad\text{as } N \to \infty. \tag{3.48}$$

    Together, (3.47) and (3.48) yield

    $$\frac{1}{\sqrt N}\sum_{j=1}^N\mathbb{E}\big[\big(Y^{h_N}_j - \mathbb{E}Z\big)^+;\,A^C_{Nj}\big] \to 0. \tag{3.49}$$

    Similarly, denoting $X^- := \max\{-X, 0\}$,

    $$\frac{1}{\sqrt N}\sum_{j=1}^N\mathbb{E}\big[\big(Y^{h_N}_j - \mathbb{E}Z\big)^-;\,A^C_{Nj}\big] \to 0. \tag{3.50}$$

    Combining (3.49) and (3.50), we get that (3.45) holds.

  3. We want to prove

    $$\sum_{j=1}^N\mathrm{Var}\big[\xi_{Nj};\,|\xi_{Nj}| \leqslant 1\big] = \sum_{j=1}^N\frac1N\mathrm{Var}\big[Y^{h_N}_j - \mathbb{E}Z;\,A_{Nj}\big] \to \mathrm{var}\,Z\quad\text{as } N \to \infty. \tag{3.51}$$

    We have the following equality:

    $$\mathrm{Var}\big[Y^{h_N}_j - \mathbb{E}Z;\,A_{Nj}\big] = \mathbb{E}\big[|Y^{h_N}_j - \mathbb{E}Z|^2\,\mathbb{I}_{A_{Nj}}\big] - \big(\mathbb{E}\big[\big(Y^{h_N}_j - \mathbb{E}Z\big)\,\mathbb{I}_{A_{Nj}}\big]\big)^2. \tag{3.52}$$

    Note that

    $$\frac1N\sum_{j=1}^N\big(\mathbb{E}\big[\big(Y^{h_N}_j - \mathbb{E}Z\big)\,\mathbb{I}_{A_{Nj}}\big]\big)^2 = \frac1N\sum_{j=1}^N\big(\mathbb{E}\big[\big(Y^{h_N}_j - Z + Z - \mathbb{E}Z\big)\,\mathbb{I}_{A_{Nj}}\big]\big)^2 \leqslant \frac2N\sum_{j=1}^N\Big(\big(\mathbb{E}\big[\big(Y^{h_N}_j - Z\big)\,\mathbb{I}_{A_{Nj}}\big]\big)^2 + \big(\mathbb{E}\big[\big(Z - \mathbb{E}Z\big)\,\mathbb{I}_{A_{Nj}}\big]\big)^2\Big). \tag{3.53}$$

    Since

    $$\mathbb{E}\big[\big(Y^{h_N}_j - Z\big)\,\mathbb{I}_{A_{Nj}}\big] \leqslant \mathbb{E}\big[|Y^{h_N}_j - Z|\big] \leqslant M^2_{t,x}\,h_N^{\gamma/\beta}, \tag{3.54}$$

    we have

    $$\frac1N\sum_{j=1}^N\big(\mathbb{E}\big[\big(Y^{h_N}_j - Z\big)\,\mathbb{I}_{A_{Nj}}\big]\big)^2 \to 0\quad\text{as } N \to \infty. \tag{3.55}$$

    Note that $0 = \mathbb{E}[Z - \mathbb{E}Z]$, so $\mathbb{E}\big[(Z - \mathbb{E}Z)\,\mathbb{I}_{A_{N1}}\big] = -\mathbb{E}\big[(Z - \mathbb{E}Z)\,\mathbb{I}_{A^C_{N1}}\big]$, and $\mathbb{P}\big[A^C_{N1}\big] \to 0$ as $N \to \infty$; hence

    $$\frac2N\sum_{j=1}^N\big(\mathbb{E}\big[\big(Z - \mathbb{E}Z\big)\,\mathbb{I}_{A_{Nj}}\big]\big)^2 = 2\big(\mathbb{E}\big[\big(Z - \mathbb{E}Z\big)\,\mathbb{I}_{A^C_{N1}}\big]\big)^2 \to 0\quad\text{as } N \to \infty. \tag{3.56}$$

    Together, (3.55) and (3.56) show that the right-hand side of (3.53) converges to 0, and therefore from (3.52) we only need to prove

    $$\frac1N\sum_{j=1}^N\mathbb{E}\big[|Y^{h_N}_j - \mathbb{E}Z|^2\,\mathbb{I}_{A_{Nj}}\big] = \mathbb{E}\big[|Y^{h_N}_1 - \mathbb{E}Z|^2\,\mathbb{I}_{A_{N1}}\big] \to \mathrm{Var}\,Z. \tag{3.57}$$

    Note that

    $$\mathbb{E}\big[\big(Y^{h_N}_1 - \mathbb{E}Z\big)^2\big] - \mathbb{E}\big[\big(Z - \mathbb{E}Z\big)^2\big] = \mathbb{E}\big[\big(Y^{h_N}_1 + Z - 2\mathbb{E}Z\big)\big(Y^{h_N}_1 - Z\big)\big] \leqslant \big(\mathbb{E}\big[\big(Y^{h_N}_1 + Z - 2\mathbb{E}Z\big)^2\big]\,\mathbb{E}\big[\big(Y^{h_N}_1 - Z\big)^2\big]\big)^{\frac12}. \tag{3.58}$$

    Since $\mathbb{E}[Z^2] < \infty$ and $\mathbb{E}\big[|Y^{h_N} - Z|^2\big] \leqslant M^3_{t,x}\,h_N^{2\gamma/\beta}$, the quantities $\mathbb{E}\big[(Y^{h_N})^2\big]$ have a uniform upper bound for all $N$. Hence $\mathbb{E}\big[\big(Y^{h_N}_1 + Z - 2\mathbb{E}Z\big)^2\big]$ has a uniform upper bound for all $N$, and the right-hand side of (3.58) converges to 0 as $N \to \infty$.

    Therefore we only need to show

    $$\mathbb{E}\big[|Y^{h_N}_1 - \mathbb{E}Z|^2\,\mathbb{I}_{A^C_{N1}}\big] \to 0\quad\text{as } N \to \infty. \tag{3.59}$$

    In fact, this is true because

    $$\mathbb{E}\big[|Y^{h_N}_1 - \mathbb{E}Z|^2\,\mathbb{I}_{A^C_{N1}}\big] \leqslant 2\,\mathbb{E}\big[|Y^{h_N}_1 - Z|^2\,\mathbb{I}_{A^C_{N1}}\big] + 2\,\mathbb{E}\big[|Z - \mathbb{E}Z|^2\,\mathbb{I}_{A^C_{N1}}\big], \tag{3.60}$$

    and both terms converge to 0 by (3.47) and (3.48).□

Remark 3.4

The choice of $h_N$ is not unique. In fact, it only needs to satisfy $\sqrt N\,h_N^{\gamma/\beta} \to 0$ as $N \to \infty$.

4 Simulation and algorithm

Now we study how the starting level t of the decreasing stable process and the starting point x of the stable process X influence the Monte Carlo estimator (1.4).

We set $d = 1$, $\alpha = 1/2$, $\beta = 3/2$, and denote $\bar a = t - a$. In Sections 4.1 and 4.2 we set $\phi(x) = |x|^{1/2}$.

4.1 Unbiased FPDE

Now the estimator is (1.5). Recall that $X^x_{T_t} \stackrel{d}{=} T_t^{1/\beta}X_1 + x \stackrel{d}{=} \bar a^{\alpha/\beta}\,\eta^{-\alpha/\beta}\,X_1 + x$.

Algorithm 1

Sample $u_N(t,x)$

1: u = 0;
2: for k = 1 : N do
3:   sample $\eta$;
4:   $T_t = \bar a^\alpha\,\eta^{-\alpha}$;
5:   sample $X_1$;
6:   $X = T_t^{1/\beta}X_1 + x$;
7:   u = u + $\phi$(X);
8: end for
9: ū = u/N;
10: return ū.
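A possible Python transcription of Algorithm 1, reusing the samplers sketched in Sections 1 and 2 (vectorised over the N samples; the function signature is our choice):

```python
def u_N(a_bar, x, alpha, beta, phi, N, rng):
    """Monte Carlo estimator (1.5) following Algorithm 1."""
    eta = sample_eta(alpha, N, rng)
    Tt = a_bar ** alpha * eta ** (-alpha)   # Lemma 2.1: T_t = abar^alpha * eta^(-alpha)
    X1 = sample_sym_stable(beta, N, rng)
    X = Tt ** (1.0 / beta) * X1 + x         # line 6 of Algorithm 1
    return float(np.mean(phi(X)))

# Setting of Section 4.1: phi(x) = |x|^{1/2}, x = 0, abar between 1 and 10, e.g.
# u_N(5.0, 0.0, 0.5, 1.5, lambda y: np.abs(y) ** 0.5, 10**5, np.random.default_rng(2))
```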

First we set $N = 10^5$. Let $x = 0$ and let $\bar a$ increase from 1 to 10.

Fig. 4.1: (a) $u_N$ as a function of $\bar a$; (b) $u_N/\bar a^{\alpha/(2\beta)}$.

Figure 4.1(a) shows that $u_N$ tends to increase as we increase $\bar a$. In fact, we have

$$\mathbb{E}\big[|X^x_{T_t}|^{1/2}\big] = \mathbb{E}\big[\bar a^{\frac{\alpha}{2\beta}}\,\eta^{-\frac{\alpha}{2\beta}}\,|X_1|^{1/2}\big] = \bar a^{\frac{\alpha}{2\beta}}\,\mathbb{E}\big[\eta^{-\frac{\alpha}{2\beta}}\big]\,\mathbb{E}\big[|X_1|^{1/2}\big]. \tag{4.1}$$

We can check this against Figure 4.1(b), which plots $u_N/\bar a^{\alpha/(2\beta)}$: it is almost constant, which supports the correctness of our algorithm.

4.2 FPDE with bias

We set $g(x) = |x|^{1/2}$.

Algorithm 2

Sample $u^h_N(t,x)$

1: u = 0;
2: for k = 1 : N do
3:   sample $\eta$;
4:   $T_t = \bar a^\alpha\,\eta^{-\alpha}$;
5:   sample $X_1^j$, j = 1, …, $\lfloor T_t/h\rfloor$;
6:   S = 0;
7:   X = x;
8:   sample $X_1$;
9:   for j = 1 : $\lfloor T_t/h\rfloor$ do
10:     $X = X + h^{1/\beta}X_1^j$;
11:     S = S + h g(X);
12:   end for
13:   $X = X + \big(T_t - h\lfloor T_t/h\rfloor\big)^{1/\beta}X_1$;
14:   u = u + $\phi$(X) + S;
15: end for
16: ū = u/N;
17: return ū.
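And a possible Python transcription of Algorithm 2 (again our own sketch on top of the earlier samplers):

```python
def u_N_h(a_bar, x, alpha, beta, phi, g, h, N, rng):
    """Monte Carlo estimator (1.4) following Algorithm 2."""
    total = 0.0
    for _ in range(N):
        eta = sample_eta(alpha, 1, rng)[0]
        Tt = a_bar ** alpha * eta ** (-alpha)
        n = int(Tt // h)
        X, S = x, 0.0
        for inc in sample_sym_stable(beta, n, rng):   # lines 9-12: walk the grid of step h
            X += h ** (1.0 / beta) * inc
            S += h * g(X)
        X += (Tt - n * h) ** (1.0 / beta) * sample_sym_stable(beta, 1, rng)[0]  # line 13
        total += phi(X) + S
    return total / N
```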

Figure 4.2(c) shows $u^h_N$ for $x = 0$, $h = 0.01$, $N = 10^5$, with $\bar a$ ranging from 1 to 10.

Fig. 4.2: (c) $u^h_N$ as a function of $\bar a$; (d) the forcing-term average divided by $\bar a^{2/3}$.

Similarly, recalling that $T_t \stackrel{d}{=} \bar a^\alpha\,\eta^{-\alpha}$,

$$\mathbb{E}\Big[\int_0^{T_t}g\big(X^x_s\big)\,ds\Big] = \mathbb{E}\Big[\mathbb{E}\Big[\int_0^{T_t}\big|X^0_s\big|^{1/2}\,ds\,\Big|\,T_t\Big]\Big] = \mathbb{E}\Big[\mathbb{E}\Big[\int_0^{T_t}\big|X^0_1\big|^{1/2}s^{\frac{1}{2\beta}}\,ds\,\Big|\,T_t\Big]\Big] = \mathbb{E}\big[\big|X^0_1\big|^{1/2}\big]\,\mathbb{E}\big[T_t^{1+\frac{1}{2\beta}}\big]\Big/\Big(1+\frac{1}{2\beta}\Big) = \bar a^{\alpha\frac{1+2\beta}{2\beta}}\,\mathbb{E}\big[\big|X^0_1\big|^{1/2}\big]\,\mathbb{E}\big[\eta^{-\alpha\frac{1+2\beta}{2\beta}}\big]\Big/\Big(1+\frac{1}{2\beta}\Big). \tag{4.2}$$

As shown in Figure 4.2(d), $\frac1N\sum_{k=1}^N\sum_{i=1}^{\lfloor T^k_t/h\rfloor}h\,g\big(X^k_{t^k_i}\big)\big/\bar a^{2/3}$ is almost constant (note that $\alpha\frac{1+2\beta}{2\beta} = 2/3$ for our parameters).

Figure 4.3 shows $u^h_N$ when we fix $\bar a = 5$ and let $x$ range over 0:0.1:10.

Fig. 4.3: $u^h_N$ for $\bar a = 5$, $x$ = 0:0.1:10.

4.3 Confidence interval

For simplicity, we set $\phi(x) \equiv 1$, $g(x) = |x|^{1/2}$, $\bar a = t - a = 1$, $x = 0$, $h = 10^{-3}$. Recalling our discussion in Remark 3.1, we only need an upper bound for $\mathbb{E}[Z(t,x)^2]$ in this example; in fact, in Remark 3.3 we have already obtained a computable upper bound for $\mathbb{E}[Z(t,x)^2]$. Figure 4.4 presents the asymptotic confidence intervals at level 95%.
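A sketch of how one point of Figure 4.4 could be reproduced with the pieces above (our own assembly; note that the paper uses the computable bound on $\mathbb{E}[Z^2]$ from Remark 3.3, whereas for brevity we pass the sample variance as a stand-in variance bound):

```python
rng = np.random.default_rng(3)
alpha, beta, a_bar, x, h, N = 0.5, 1.5, 1.0, 0.0, 1e-3, 10**4
phi = lambda y: 1.0                # phi(x) == 1
g = lambda y: abs(y) ** 0.5        # g(x) = |x|^{1/2}
samples = np.array([u_N_h(a_bar, x, alpha, beta, phi, g, h, 1, rng) for _ in range(N)])
print(asymptotic_ci(samples, samples.var(), level=0.95))
```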

Fig. 4.4: Confidence intervals at level 95%.

Acknowledgements

V.K. is supported by the Russian Science Foundation Project No 20-11-20119. F.L. is supported by the China Scholarship Council PhD award at Warwick. A.M. is supported by The Alan Turing Institute under the EPSRC Grant EP/N510129/1 and by the EPSRC Grant EP/P003818/1 and the Turing Fellowship funded by the Programme on Data-Centric Engineering of Lloyd’s Register Foundation.

References

[1] G.A. Anastassiou and I.K. Argyros, Intelligent Numerical Methods: Applications to Fractional Calculus. Springer (2016); DOI: 10.1007/978-3-319-26721-0.

[2] D. Baleanu, K. Diethelm, E. Scalas and J.J. Trujillo, Fractional Calculus: Models and Numerical Methods. World Scientific, Hackensack, NJ (2017).

[3] K. Burrage, A. Cardone, R. D'Ambrosio and B. Paternoster, Numerical solution of time fractional diffusion systems. Applied Numerical Mathematics 116 (2017), 82–94; DOI: 10.1016/j.apnum.2017.02.004.

[4] M.E. Hernández-Hernández, V. Kolokoltsov and L. Toniazzi, Generalised fractional evolution equations of Caputo type. Chaos, Solitons & Fractals 102 (2017), 184–196; DOI: 10.1016/j.chaos.2017.05.005.

[5] O. Kallenberg, Foundations of Modern Probability. Springer Science & Business Media (2016).

[6] K. Sato, Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press (1999).

[7] V. Kiryakova, Generalized Fractional Calculus and Applications. Longman Sci. & Tech., Harlow; copubl. in USA with John Wiley & Sons, Inc., New York (1994).

[8] A.N. Kochubei and Y. Kondratiev, Fractional kinetic hierarchies and intermittency. Kinetic and Related Models 10, No 3 (2017), 725–740; DOI: 10.3934/krm.2017029.

[9] V. Kolokoltsov, Differential Equations on Measures and Functional Spaces. Birkhäuser (2019); DOI: 10.1007/978-3-030-03377-4.

[10] V.N. Kolokoltsov, Generalized continuous-time random walks, subordination by hitting times, and fractional dynamics. Theory of Probability & Its Applications 53, No 4 (2009), 594–609; DOI: 10.1137/S0040585X97983857.

[11] V.N. Kolokoltsov, The probabilistic point of view on the generalized fractional partial differential equations. Fract. Calc. Appl. Anal. 22, No 3 (2019), 542–600; DOI: 10.1515/fca-2019-0033; https://www.degruyter.com/view/journals/fca/22/3/fca.22.issue-3.xml.

[12] L. Lv and L. Wang, Stochastic representation and Monte Carlo simulation for multiterm time-fractional diffusion equation. Advances in Mathematical Physics 2020 (2020), Art. 1315426; DOI: 10.1155/2020/1315426.

[13] M.M. Meerschaert and A. Sikorskii, Stochastic Models for Fractional Calculus. Walter de Gruyter, Berlin (2011); DOI: 10.1515/9783110258165.

[14] R. O'Donnell, Analysis of Boolean Functions. Cambridge University Press (2014); DOI: 10.1017/CBO9781139814782.

[15] G. Samorodnitsky and M. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. CRC Press (1994).

[16] V.V. Uchaikin and V.V. Saenko, Stochastic solution to partial differential equations of fractional orders. Sib. Zh. Vychisl. Mat. 6, No 2 (2003), 197–203.

[17] P.N. Vabishchevich, Numerical solution of nonstationary problems for a space-fractional diffusion equation. Fract. Calc. Appl. Anal. 19, No 1 (2016), 116–139; DOI: 10.1515/fca-2016-0007; https://www.degruyter.com/view/journals/fca/19/1/fca.19.issue-1.xml.

Received: 2020-11-27
Revised: 2020-12-17
Published Online: 2021-01-29
Published in Print: 2021-02-23

© 2021 Diogenes Co., Sofia
