1 Introduction

In this article we consider the stochastic fractional heat equation

$$\begin{aligned} \frac{\partial u}{\partial t} (t,x)= - (-\Delta )^{\frac{\alpha }{2}} u(t,x) + \sigma (u(t,x)) {\dot{W}} (t,x), \qquad t\ge 0, x \in {\mathbb {R}}^d \end{aligned}$$
(1.1)

with initial condition \(u(0, x)\equiv 1\). Here \(\sigma \) is assumed to be a Lipschitz continuous function with the property \(\sigma (1)\ne 0\) and \(- (-\Delta )^{\frac{\alpha }{2}}\) is the fractional Laplace operator.

The fractional Laplace operator can be viewed as a generalization of spatial derivatives and classical Sobolev spaces to fractional order derivatives and fractional Sobolev spaces. Together with the associated equations, it has numerous applications in different fields, including fluid dynamics, quantum mechanics, and finance, to name just a few. For detailed discussions and different equivalent formal definitions, see [10] and the references therein.

In the present article we provide a general existence and uniqueness result for equation (1.1) that covers many different choices of the (Gaussian) random perturbation \({\dot{W}}\). We note that in this context, existence and uniqueness of the solution to (1.1) can be deduced from the general results of [9] (although in [9] it was assumed that the spatial covariance of \({\dot{W}}\) has a spectral density). However, for the reader's convenience we present and prove existence and uniqueness in our particular setting. Our main contribution is in providing quantitative limit theorems in a general context. These results cover three important situations: when \({\dot{W}}\) is a standard space–time white noise; when \({\dot{W}}\) is a white-colored noise, i.e. a Gaussian field that behaves as a Wiener process in time and has a non-trivial spatial covariance given by the Riesz kernel of order \(\beta <\min (\alpha ,d)\); and when \({\dot{W}}\) is a white-colored noise with spatial covariance given by an integrable and bounded function \(\gamma \).

Our results continue the line of research initiated in [12, 13], where a similar problem for the stochastic heat equation on \({\mathbb {R}}\) (or \({\mathbb {R}}^d\), respectively) driven by a space–time white noise (or with spatial covariance given by the Riesz kernel, respectively) was considered. As such, we extend the results of [12, 13], whose main theorems can be recovered from ours by simply plugging in \(\alpha =2\). Proof-wise, our methods are similar to those of these two references. However, we stress that in our case we do not have fine properties of the heat kernel at our disposal, and hence one has to be more careful in the computations. In particular, our main contribution is the bound for the norm of the Malliavin derivative (cf. Proposition 5.2), which differs from the classical Laplacian case. Moreover, we provide a general approach to obtaining such bounds, based on the boundedness properties of the convolution operator associated with the spatial covariance \(\gamma \) (see Proposition 3.2), together with the semigroup property and some integrability of the Green kernel.

Regarding related literature, we also mention [8], which studies the stochastic wave equation on \({\mathbb {R}}^d\). In that article, the driving noise is a Gaussian multiplicative noise that is white in time and colored in space, with the correlation in the space variable described by the Riesz kernel. As such, our results complement the above mentioned works on the stochastic heat and wave equations.

The rest of the paper is organised as follows. In Sect. 2 we describe and discuss our main results. In particular, we provide the existence and uniqueness result for the solution, and provide quantitative central limit theorems for the spatial average in the mentioned particular cases. In Sect. 3 we recall some preliminaries, including some basic facts on Stein’s method and Malliavin calculus that are used to prove our results, together with some basic facts on the Green kernel related to the fractional heat equation, and a key inequality proved in Proposition 3.2. Proofs of our main results are provided in Sects. 4 and 5.

2 Main results

In this section we introduce and discuss our main results concerning equation (1.1). Throughout the article, we assume that \({\dot{W}}\) is a centered Gaussian noise with a covariance given by

$$\begin{aligned} {\mathbb {E}}[{\dot{W}}(t,x){\dot{W}}(s,y)] = \delta _0(t-s)\gamma (x-y), \end{aligned}$$
(2.1)

where \(\delta _0\) denotes the Dirac delta function and \(\gamma \) is a nonnegative and nonnegative definite symmetric measure. The spectral measure \({\widehat{\gamma }}(d\xi )\) is defined through the Fourier transform of the measure \(\gamma \):

$$\begin{aligned} {\widehat{\gamma }}(d\xi )= \left( {\mathcal {F}}\gamma \right) (d\xi ) := \int _{{\mathbb {R}}^d} e^{-i\langle \xi ,y\rangle }d\gamma (y). \end{aligned}$$

The existence of the solution to (1.1) is guaranteed if a fractional version (2.4) of Dalang's condition is satisfied. In particular, this is the case in all the examples mentioned in the introduction.

We next introduce the Green kernel (or fundamental solution) associated to the operator \(- (-\Delta )^{\frac{\alpha }{2}}\), where \(\alpha \in (0,2]\). This kernel, denoted in the sequel by \(G_{\alpha }\), is defined via its Fourier transform

$$\begin{aligned} \left( {\mathcal {F}} G_{\alpha }(t, \cdot )\right) (\xi ) =e ^ {-t\vert \xi \vert ^ {\alpha } }, \xi \in {\mathbb {R}}^d, t\ge 0 \end{aligned}$$
(2.2)

for \(\alpha >0\) (here and in the sequel, \(\vert \cdot \vert \) denotes the Euclidean norm). While explicit formulas for \(G_\alpha (t,x)\) are known only in the special cases \(\alpha = 1\) (the Poisson kernel) and \(\alpha =2\) (the heat kernel), the kernel \(G_\alpha (t,x)\) enjoys many desirable properties; those relevant for our purposes are recorded in Sect. 3.
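Although \(G_\alpha \) has no closed form for general \(\alpha \), it is straightforward to evaluate numerically from (2.2). The following minimal Python sketch (ours, not part of the original exposition; all grid parameters are illustrative assumptions) inverts the Fourier transform on a grid in \(d=1\), checks that the kernel has unit mass, and compares with the explicit Poisson kernel for \(\alpha =1\).

```python
import numpy as np

# Minimal sketch (not from the paper): evaluate G_alpha(t, .) in d = 1 by
# inverting its Fourier transform (2.2) on a periodic grid.

def green_kernel_1d(t, alpha, L=200.0, n=2**14):
    """Approximate G_alpha(t, x) on [-L/2, L/2) via an inverse FFT of
    exp(-t |xi|^alpha)."""
    dx = L / n
    x = (np.arange(n) - n // 2) * dx
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    g = np.fft.fftshift(np.fft.ifft(np.exp(-t * np.abs(xi) ** alpha)).real) / dx
    return x, g

t, alpha = 1.5, 1.0
x, g = green_kernel_1d(t, alpha)
print("total mass, cf. (3.8):", g.sum() * (x[1] - x[0]))     # ~ 1.0

# For alpha = 1, G_1(t, x) is the Poisson kernel t / (pi (t^2 + x^2)).
poisson = t / (np.pi * (t ** 2 + x ** 2))
print("max deviation from Poisson kernel:", np.abs(g - poisson).max())
```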

Similarly to the classical stochastic heat equation case, the solution to the stochastic equation (1.1) can be expressed in terms of \(G_{\alpha }\). That is, the mild solution is a measurable random field \(\left( u(t,x), t\ge 0, x \in {\mathbb {R}}^d \right) \) which satisfies

$$\begin{aligned} u(t,x) =1+ \int _{0} ^ {t} \int _{{\mathbb {R}}^d} G_{\alpha } (t-s, x-y) \sigma (u(s, y)) W (ds, dy), \end{aligned}$$
(2.3)

where the stochastic integral is understood in the Walsh sense [21]. The following existence and uniqueness result holds; since the claim follows as a special case of [9, Theorem 1.2] whenever \({\hat{\gamma }}(d\xi )\) is absolutely continuous, neither the result nor condition (2.4) is surprising.

Theorem 2.1

Suppose that the Fourier transform \(\widehat{\gamma }= {\mathcal {F}} \gamma \) satisfies the fractional Dalang’s condition:

$$\begin{aligned} \int _{{\mathbb {R}}^d} \frac{{\widehat{\gamma }}(d\xi )}{\beta +|\xi |^\alpha } < \infty , \end{aligned}$$
(2.4)

for some (and hence for all) \(\beta >0\). Then Eq. (1.1) admits a unique mild solution given by (2.3). Moreover, for any \(p\ge 1\) and any \(T>0\) we have

$$\begin{aligned} \sup _{t\in (0,T],x\in {\mathbb {R}}^d} {\mathbb {E}}[ \left| u(t,x)\right| ^p] < \infty . \end{aligned}$$

Remark 1

We present our results only in the case of the initial condition \(u(0, x)\equiv 1\), which simplifies our presentation and notation. We stress, however, that with a little extra effort our results could be extended to cover more general initial conditions. In fact, our existence result can be generalised to cover even initial conditions given by measures (satisfying suitable conditions). Indeed, in that case the mild solution is given by

$$\begin{aligned} u(t,x) = \int _{{\mathbb {R}}^d}G_\alpha (t,x-y)u_0(dy) + \int _0^t \int _{{\mathbb {R}}^d}G_\alpha (t-s,x-y)\sigma (u(s,y))W(ds,dy), \end{aligned}$$

where \(u_0(dy)\) denotes the initial measure. In this case, one needs to impose conditions on \(u_0(dy)\) as well, in addition to (2.4). In particular, the integral \(\int _{{\mathbb {R}}^d}G_\alpha (t,x-y)u_0(dy)\) above should exist. For a detailed exposition of the topic in the case of the stochastic heat equation (\(\alpha =2\)), we refer to [4]. Similarly, in the spirit of [13, Corollary 3.3], our approximation results can be generalised to the case \(u(0,x) = f(x)\) under suitable assumptions on the function f, once a comparison principle is established.

Throughout the article, for a function f and a (signed) measure \(\mu \) we denote by \(f*\mu \) the convolution defined by

$$\begin{aligned} (f *\mu )(y) = \int _{{\mathbb {R}}^d} f(y-x)d\mu (x), \end{aligned}$$
(2.5)

provided it exists. If \(\mu \) is absolutely continuous, then \(d\mu (x) = \mu (x)dx\) for some density, again denoted by \(\mu \), and we recover the classical convolution of integrable functions

$$\begin{aligned} (f *\mu )(y) = \int _{{\mathbb {R}}^d} f(y-x)\mu (x)dx. \end{aligned}$$

If \(\mu \) can be viewed as a function, the well-known Young convolution inequality states that for \(\frac{1}{p} + \frac{1}{q} = \frac{1}{r}+1\) with \(1\le p, q\le r\le \infty \), we have

$$\begin{aligned} \Vert f*\mu \Vert _{L^{r}({\mathbb {R}}^d)} \le \Vert f\Vert _{L^{p}({\mathbb {R}}^d)} \Vert \mu \Vert _{L^{q}({\mathbb {R}}^d)}. \end{aligned}$$
(2.6)

In particular, this gives us, for any \(p\ge 1\),

$$\begin{aligned} \Vert f*\mu \Vert _{L^{p}({\mathbb {R}}^d)} \le \Vert f\Vert _{L^{p}({\mathbb {R}}^d)} \Vert \mu \Vert _{L^{1}({\mathbb {R}}^d)}. \end{aligned}$$
(2.7)

More generally, if \(\mu \) is a finite measure, a simple mollification argument shows that (2.7) remains valid with \(\Vert \mu \Vert _{L^{1}({\mathbb {R}}^d)}\) replaced by \(\mu ({\mathbb {R}}^d)\) (see, e.g. [1, Proposition 3.9.9]). Finally, by \(I_{d-\beta }\) we denote the Riesz potential defined, for \(0<\beta < d\), by

$$\begin{aligned} (I_{d-\beta }f)(x) = \int _{{\mathbb {R}}^d} f(y)|x-y|^{-\beta }dy = (K_{d-\beta } *f)(x), \end{aligned}$$

where \(K_{d-\beta }(y) = |y|^{-\beta }\). More generally, the Riesz potential \(I_{d-\beta } \mu \) of a measure \(\mu \) is defined through the convolution

$$\begin{aligned} \left( I_{d-\beta } \mu \right) (x) = (K_{d-\beta } *\mu )(x) = \int _{{\mathbb {R}}^d} |x-y|^{-\beta }d\mu (y). \end{aligned}$$

In order to simplify our notation, we also define \(I_{d-\beta }\) for \(\beta =d\) simply as the identity operator.
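As a toy illustration of the finite-measure version of (2.7), the following sketch (ours; the grid, the function f, and the point masses are arbitrary choices) convolves a function with a discrete measure on a periodic grid and compares the \(L^p\) norms.

```python
import numpy as np

# Toy illustration (ours) of (2.7) for a finite measure: convolve f with
# mu = 0.5 delta_{x_1} + 1.0 delta_{x_2} on a periodic grid.
rng = np.random.default_rng(0)
n, dx, p = 512, 0.05, 3.0
grid = (np.arange(n) - n // 2) * dx
f = rng.standard_normal(n) * np.exp(-grid ** 2)   # rough, rapidly decaying f

atoms = {12: 0.5, 40: 1.0}                        # grid offset -> point mass
f_conv_mu = sum(w * np.roll(f, k) for k, w in atoms.items())

lhs = (np.sum(np.abs(f_conv_mu) ** p) * dx) ** (1 / p)
rhs = (np.sum(np.abs(f) ** p) * dx) ** (1 / p) * sum(atoms.values())
print(lhs <= rhs, lhs, rhs)   # lhs <= ||f||_p * mu(R), as in (2.7)
```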

We also provide approximation results for the spatial average over a Euclidean ball of radius R, denoted by \(B_R\). For these purposes we require more refined information on the covariance \(\gamma \) than the general condition (2.4).

Assumption 2.2

We assume that \(\gamma \) is given by the Riesz potential \(\gamma = I_{d-\beta } \mu \), where \(0<\beta \le d\) and \(\mu \) is a finite symmetric measure. Moreover, one of the following conditions holds:

  1. (i)

    \(\beta < \alpha \wedge d\).

  2. (ii)

    \(\beta =d=1\) and \(\alpha >1\).

  3. (iii)

    \(\beta =d \ge \alpha \) and \(\mu =\gamma \) is absolutely continuous, i.e. \(d\gamma (x)= \gamma (x)dx\), with \(\gamma \in L^r({\mathbb {R}}^d)\) for some \(r>\frac{d}{\alpha }\). In addition, if \(r>2\), we impose Dalang’s condition (2.4).

Remark 2

Condition \(\beta <\alpha \) in Case (i) implies that Dalang’s condition (2.4) is satisfied. Indeed, we recall that a Fourier transform \({\widehat{\mu }}\) of a finite measure \(\mu \) is a bounded continuous function. Consequently, by recalling the convolution theorem \(\widehat{f *\mu } = {\widehat{f}}{\widehat{\mu }}\) and the fact that the Riesz potential \(I_{d-\beta }\) is a Fourier multiplier, we obtain

$$\begin{aligned} {\widehat{\gamma }}(d\xi ) =c_{d,\beta } |\xi |^{\beta -d}\widehat{\mu }(\xi )d\xi , \end{aligned}$$
(2.8)

from which we deduce (2.4). Dalang’s condition (2.4) clearly holds in Case (ii). Finally, in Case (iii) we can deduce (2.4) from the Hausdorff-Young inequality if \(r\le 2\).
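For Case (i), the computation (2.8) reduces Dalang's condition to a one-dimensional radial integral, which one can probe numerically. The following sketch (ours; the cutoffs and the choice of parameters are arbitrary, and the constant in (2.4) is taken to be 1) illustrates that the radial integral stabilizes for \(\beta <\alpha \) and grows logarithmically in the boundary case \(\beta =\alpha \).

```python
from scipy.integrate import quad

# Sketch (ours): by (2.8) the spectral measure in Case (i) is a constant times
# |xi|^{beta-d} d xi, so in polar coordinates (2.4) becomes
#   int_0^infty r^{beta-1} / (1 + r^alpha) dr < infty,
# which holds precisely when 0 < beta < alpha.

def radial_dalang(alpha, beta, cutoff):
    val, _ = quad(lambda r: r ** (beta - 1.0) / (1.0 + r ** alpha),
                  0.0, cutoff, limit=300)
    return val

for cutoff in (1e2, 1e4, 1e6):
    print(cutoff,
          radial_dalang(1.5, 1.0, cutoff),   # beta < alpha: stabilizes
          radial_dalang(1.5, 1.5, cutoff))   # beta = alpha: ~ log growth
```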

Remark 3

By carefully examining our proofs, one can see that our results remain valid provided that \(\gamma = I_{d-\beta }\mu \) satisfies Dalang's condition and the statement of Proposition 3.2 holds for a suitable number 2q.

Case (ii) covers the case of the space–time white noise, where \(\gamma \) is given by the Dirac delta \(\gamma (y) = \delta _0(y)\). The case \(\gamma (y) = |y|^{-\beta }\) corresponds to the noise with spatial correlation given by the Riesz kernel, studied in the heat equation case \(\alpha =2\) in [13]. In our terminology, this is included in Case (i) where \(\gamma = I_{d-\beta }\delta _0\).

Recall that the total variation distance between random variables (or associated probability distributions) is given by

$$\begin{aligned} d_{\mathrm{TV}}(F, Z) := \sup \Big \{ P(F\in A) - P(Z\in A) \,:\, A \subset {\mathbb {R}}\ \text {Borel} \Big \}. \end{aligned}$$
(2.9)

Our first main results concern the following two quantitative central limit theorems for the spatial average of the solution.

Theorem 2.3

Let \(\gamma \) satisfy Assumption 2.2 and let u be the solution to the stochastic fractional heat equation (1.1). Then for every \(t>0\) there exists a constant C, depending solely on t, \(\alpha \), \(\sigma \), and the covariance \(\gamma \), such that

$$\begin{aligned} d_\mathrm{TV}\left( \frac{1}{\sigma _R} \int _ {B_R} \big [ u(t,x) - 1 \big ]\,dx, ~Z\right) \le CR^{-\frac{\beta }{2}}\,, \end{aligned}$$

where \(Z\sim N(0,1)\) is a standard normal random variable, and \(\sigma _R^2 = \mathrm{Var} \big ( \int _ {B_R} [ u(t,x) - 1 ]\,dx \big ) \sim R^{2d-\beta }\), as \(R\rightarrow \infty \).

Remark 4

While we have stated our result only for a ball \(B_R\) centered at the origin, we stress that exactly the same arguments allow one to replace \(B_R\) by some other scaled body \(RA_0 = \{Ra : a \in A_0\}\); this affects only the normalization constants. Moreover, as in the heat case (cf. [13, Remark 3]), one can allow the center \(a_R\) of the ball to vary with R as well. This follows easily from stationarity.
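To make Theorem 2.3 concrete, the following Monte Carlo sketch (ours, not part of the proof) simulates equation (1.1) in the setting of Example 1 below (Case (ii): \(d=\beta =1\), space–time white noise) with \(\sigma (u)=u\), using a periodic spectral grid as a crude stand-in for \({\mathbb {R}}\) and an exponential Euler splitting step for the mild formulation (2.3); all numerical parameters are ad hoc. The standardized spatial averages should be approximately standard normal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch (ours): d = 1, space-time white noise, sigma(u) = u,
# u(0, .) = 1; a periodic grid approximates R, so note sigma(1) = 1 != 0.
alpha, t_final, L, n, dt, R = 1.5, 0.5, 200.0, 2 ** 11, 2e-3, 50.0
dx = L / n
x = (np.arange(n) - n // 2) * dx
xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)
decay = np.exp(-dt * np.abs(xi) ** alpha)   # Fourier symbol of G_alpha(dt, .)

def simulate_once():
    u = np.ones(n)
    for _ in range(int(t_final / dt)):
        dW = rng.standard_normal(n) * np.sqrt(dt / dx)  # white-noise increment
        # exponential Euler / splitting step for the mild formulation (2.3)
        u = np.fft.ifft(decay * np.fft.fft(u + u * dW)).real
    return u

mask = np.abs(x) <= R
samples = np.array([np.sum(simulate_once()[mask] - 1.0) * dx
                    for _ in range(200)])
z = (samples - samples.mean()) / samples.std()
# Rough normality diagnostics: both should be close to 0 for a Gaussian.
print("skewness:", np.mean(z ** 3), " excess kurtosis:", np.mean(z ** 4) - 3)
```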

Following the spirit of the mentioned references, we also provide a functional version of Theorem 2.3.

Theorem 2.4

Let \(\gamma \) satisfy Assumption 2.2 and let u be the solution to the stochastic fractional heat equation (1.1). Then

$$\begin{aligned} \left\{ R^{\frac{\beta }{2}-d}\int _{B_R} \big [ u(t,x) -1 \big ] \,dx \right\} _{t\in [0,T]} \Rightarrow \left\{ \int _0^t \varrho (s) dY_s\right\} _{t\in [0,T]} \,, \end{aligned}$$

as \(R\rightarrow \infty \), where Y is a standard Brownian motion, the weak convergence takes place in the space of continuous functions C([0, T]), and \(\varrho (s)\) is given as follows:

  • If \(\beta < d\), then \(\varrho (s) = \sqrt{\mu \left( {\mathbb {R}}^d\right) \int _{B_1^2}|x-x'|^{-\beta }dxdx'}{\mathbb {E}}[\sigma (u(s,y))]\).

  • If \(\beta =d\), then \(\varrho (s) = \sqrt{|B_1|\int _{{\mathbb {R}}^d} {\mathbb {E}}\left[ \sigma (u(s,0))\sigma (u(s,z))\right] d\mu (z)}\).

Note that \(\varrho \) depends on \(\alpha \) through the solution u(s,y), see Remark 6.

Remark 5

We prove later (see Lemma 5.5) that

$$\begin{aligned} \int _{{\mathbb {R}}^d} {\mathbb {E}}\left[ \sigma (u(s,0))\sigma (u(s,z))\right] d\mu (z) \ge \left[ {\mathbb {E}}[\sigma (u(s,y))]\right] ^2. \end{aligned}$$

Under our initial condition \(u(0,x) \equiv 1\), we may hence apply the arguments of [8, Lemma 3.4] to see the equivalence

$$\begin{aligned} \sigma (1)=0 \Leftrightarrow \sigma _R = 0 \text { for all } R>0 \Leftrightarrow \sigma _R=0 \text { for some } R>0 \Leftrightarrow \lim _{R\rightarrow \infty } \sigma _R^2 R^{\beta -2d} = 0. \end{aligned}$$

Hence \(\sigma (1)\ne 0\) is a natural condition guaranteeing \(\sigma _R>0\) for all \(R>0\). Note also that \(\sigma (1)\ne 0\) is necessary to exclude the trivial solution \(u(t,x)\equiv 1\), as can be seen from the Picard iteration.

Example 1

Suppose \(\mu = \delta _0\) and let \(\beta = d = 1\) and \(\alpha >1\). This case corresponds to the space–time white noise, and now

$$\begin{aligned} \varrho (s) = \sqrt{|B_1|\int _{{\mathbb {R}}^d}{\mathbb {E}}\left[ \sigma (u(s,0))\sigma (u(s,z))\right] d\mu (z)} = \sqrt{2{\mathbb {E}}\sigma ^2(u(s,0))}. \end{aligned}$$

In the case \(\alpha =2\), we thus recover the results of [12].

Example 2

Suppose \(\beta <d\) and let \(\mu = \delta _0\). This case corresponds to the white-colored case with the spatial covariance given by the Riesz kernel. Now

$$\begin{aligned} \varrho (s) = \sqrt{\int _{B_1^2}|x-x'|^{-\beta }dxdx'}{\mathbb {E}}[\sigma (u(s,y))] \end{aligned}$$

and consequently, for \(\alpha = 2\) we obtain the results of [13].

Remark 6

We emphasize that the additional parameter \(\alpha \) associated with the fractional operator does not affect the above results, except through constants and the solution u itself. Indeed, the renormalization rate and the total variation distance are, up to multiplicative constants, the same as in the case \(\alpha =2\) corresponding to the classical stochastic heat equation. Similarly, the limiting normal distribution in Theorem 2.3 and the limiting time-changed Brownian motion in Theorem 2.4 are the same as in the case \(\alpha =2\). Since \(G_\alpha \) is intimately connected to a stable Lévy process, this might appear surprising, as one might expect stable limiting laws. However, the Gaussian form of the limiting distribution is connected to the Gaussian nature of the noise \({\dot{W}}\), while the Green kernel \(G_\alpha (t,x)\) (associated to a stable process) is simply a deterministic function with suitable scaling in the time variable t and sufficient integrability in the spatial variable x. In contrast, one could expect a stable limiting law, even in the case \(\alpha =2\) where \(G_\alpha \) is the (Gaussian) heat kernel, if the noise \({\dot{W}}\) were driven by a suitable Lévy process.

3 Preliminaries

In this section we present some preliminaries that are required for the proofs of our main theorems. In particular, we recall some facts on Malliavin calculus and Stein’s method together with some basic properties of the fractional Green kernel. Finally, in Proposition 3.2 we present a basic inequality that allows us to derive a bound for the Malliavin derivative.

3.1 Malliavin calculus and Stein’s method

We start by introducing the Gaussian noise that governs the stochastic fractional heat equation (1.1).

Denote by \(C_{c}^{\infty }\left( [0, \infty ) \times {\mathbb {R}}^d\right) \) the class of \(C^{\infty }\) functions on \( [0, \infty ) \times {\mathbb {R}}^d\) with compact support. We consider a Gaussian family of centered random variables

$$\begin{aligned} \left( W(\varphi ), \varphi \in C^{\infty }_{c} \left( [0, \infty ) \times {\mathbb {R}}^d\right) \right) \end{aligned}$$

on some complete probability space \(\left( \Omega , {\mathcal {F}}, P\right) \) such that

$$\begin{aligned} {\mathbb {E}}[W(\varphi ) W (\psi ) ]= \int _{0} ^{\infty } \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d}\varphi (s, y) \psi (s, y') \gamma (y-y')dy dy'ds =: \langle \varphi , \psi \rangle _{{\mathfrak {H}}}. \end{aligned}$$
(3.1)

We stress again that, in general, \(\gamma \) is not a function, and hence (3.1) should be understood as

$$\begin{aligned} {\mathbb {E}}[W(\varphi ) W (\psi ) ]= \int _{0} ^{\infty } \int _{{\mathbb {R}}^d} \varphi (s, y) \left[ \psi (s,\cdot ) *\gamma \right] (y) dy ds. \end{aligned}$$
(3.2)

We denote by \({\mathfrak {H}}\) the Hilbert space defined as the closure of \(C_{c}^{\infty }\left( [0, \infty ) \times {\mathbb {R}}^d\right) \) with respect to the inner product (3.1). By density, we obtain an isonormal process \((W(\varphi ), \varphi \in {\mathfrak {H}})\), which is a Gaussian family of centered random variables such that, for every \(\varphi , \psi \in {\mathfrak {H}}\),

$$\begin{aligned} {\mathbb {E}}[W(\varphi ) W (\psi ) ]= \langle \varphi , \psi \rangle _{{\mathfrak {H}}}. \end{aligned}$$

The Gaussian family \((W(\varphi ), \varphi \in {\mathfrak {H}})\) is usually called a white-colored noise because it behaves as a Wiener process with respect to the time variable \(t\in [0, \infty )\) and it has a spatial covariance given by the measure \(\gamma \).
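The covariance (3.1) can be probed numerically through its spectral form. The sketch below (ours) does this in \(d=1\) for the Riesz kernel \(\gamma (y)=|y|^{-\beta }\) and a Gaussian test function at a fixed time slice; it assumes the classical normalization \(c_{1,\beta }= 2^{1-\beta }\sqrt{\pi }\,\Gamma ((1-\beta )/2)/\Gamma (\beta /2)\) for the Fourier transform of \(|y|^{-\beta }\), which (2.8) leaves implicit.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Sanity check (ours) of the spectral form of (3.1) in d = 1, with
# gamma(y) = |y|^{-beta} and phi(y) = exp(-y^2/2) at a fixed time slice.
beta = 0.5

# LHS: int int phi(y) phi(y') |y - y'|^{-beta} dy dy'; the change of variables
# u = y - y' reduces it to sqrt(pi) * int exp(-u^2/4) |u|^{-beta} du.
lhs = np.sqrt(np.pi) * quad(lambda u: np.exp(-u ** 2 / 4) * abs(u) ** (-beta),
                            -30, 30, points=[0])[0]

# RHS: (2 pi)^{-1} c_beta int |phi_hat(xi)|^2 |xi|^{beta-1} d xi, where
# |phi_hat(xi)|^2 = 2 pi exp(-xi^2) for the Gaussian phi. The constant
# c_beta below is an assumption (classical Riesz normalization).
c_beta = 2 ** (1 - beta) * np.sqrt(np.pi) * gamma((1 - beta) / 2) / gamma(beta / 2)
rhs = c_beta * quad(lambda xi: np.exp(-xi ** 2) * abs(xi) ** (beta - 1),
                    -30, 30, points=[0])[0]
print(lhs, rhs)   # the two expressions agree to quadrature accuracy
```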

Let us introduce the filtration associated to the random noise W. For \(t>0\), we denote by \({\mathcal {F}}_{t}\) the sigma-algebra generated by the random variables \(W(\varphi )\), with \(\varphi \in {\mathfrak {H}}\) having its support included in the set \([0, t]\times {\mathbb {R}}^d\). For every random field \((X(s,y), s\ge 0, y\in {\mathbb {R}}^d)\), jointly measurable and adapted with respect to the filtration \(\left( {\mathcal {F}}_{t}\right) _{t\ge 0} \), satisfying

$$\begin{aligned} {\mathbb {E}}\left[ \Vert X\Vert _{{\mathfrak {H}}} ^{2}\right] <\infty , \end{aligned}$$

we can define stochastic integrals with respect to W of the form

$$\begin{aligned} \int _{0} ^{\infty } \int _{{\mathbb {R}}^d} X(s, y) W (ds, dy) \end{aligned}$$

in the sense of Dalang-Walsh (see [6, 21]). This integral satisfies the Itô-type isometry

$$\begin{aligned} {\mathbb {E}}\left[ \left( \int _{0} ^{\infty } \int _{{\mathbb {R}}^d} X(s, y) W (ds, dy)\right) ^{2} \right] = {\mathbb {E}}\left[ \Vert X\Vert _{{\mathfrak {H}}} ^{2}\right] . \end{aligned}$$
(3.3)

The Dalang-Walsh integral also satisfies the following version of the Burkholder-Davis-Gundy inequality: for any \(t\ge 0\) and \(p\ge 2\),

$$\begin{aligned} \left| \left| \int _{0} ^{t} \int _{{\mathbb {R}}^d} X(s, y) W (ds, dy)\right| \right| _{p} ^{2} \le c_{p} \int _{0} ^{t} \int _{{\mathbb {R}}^d}\int _{{\mathbb {R}}^d }\Vert X(s, y) X(s, y')\Vert _{\frac{p}{2}} \gamma (y-y')dydy'ds. \end{aligned}$$
(3.4)

Let us next describe the basic tools from Malliavin calculus needed in this work. We introduce \(C_p^{\infty }({\mathbb {R}}^n)\) as the space of smooth functions with all their partial derivatives having at most polynomial growth at infinity, and \({\mathcal {S}}\) as the space of simple random variables of the form

$$\begin{aligned} F = f(W(h_1), \dots , W(h_n)), \end{aligned}$$

where \(f\in C_p^{\infty }({\mathbb {R}}^n)\) and \(h_i \in {\mathfrak {H}}\), \(1\le i \le n\). Then the Malliavin derivative DF is defined as the \({\mathfrak {H}}\)-valued random variable

$$\begin{aligned} DF=\sum _{i=1}^n \frac{\partial f}{\partial x_i} (W(h_1), \dots , W(h_n)) h_i\,. \end{aligned}$$
(3.5)

For any \(p\ge 1\), the operator D is closable as an operator from \(L^p(\Omega )\) into \(L^p(\Omega ; {\mathfrak {H}})\). Then \({\mathbb {D}}^{1,p}\) is defined as the completion of \({\mathcal {S}}\) with respect to the norm

$$\begin{aligned} \Vert F\Vert _{1,p} = \left( {\mathbb {E}}[|F|^p] + {\mathbb {E}}( \Vert D F\Vert ^p_{\mathfrak {H}}) \right) ^{1/p}\,. \end{aligned}$$

The adjoint operator \(\delta \) of the derivative is defined through the duality formula

$$\begin{aligned} {\mathbb {E}}(\delta (u) F) = {\mathbb {E}}( \langle u, DF \rangle _{\mathfrak {H}}), \end{aligned}$$
(3.6)

valid for any \(F \in {\mathbb {D}}^{1,2}\) and any \(u\in \mathrm{Dom} \, \delta \subset L^2(\Omega ; {\mathfrak {H}}) \). The operator \(\delta \) is also called the Skorokhod integral since, in the case of the standard Brownian motion, it coincides with an extension of the Itô integral introduced by Skorokhod (see, e.g. [11, 19]). In our context, any adapted random field X which is jointly measurable and satisfies (3.3) belongs to the domain of \(\delta \), and \(\delta (X)\) coincides with the Walsh integral:

$$\begin{aligned} \delta (X) = \int _0^\infty \int _{{\mathbb {R}}^d} X(s,y) W(d s, d y). \end{aligned}$$

This allows us to represent the solution u(t,x) to (1.1) as a Skorokhod integral.

The proofs of our main results are based on the Malliavin-Stein approach, introduced by Nourdin and Peccati in [16] (see also the book [17]). In particular, we apply the following result to obtain a rate of convergence in the total variation distance (see [20] and also [12, 18]).

Proposition 3.1

If F is a centered random variable in the Sobolev space \({\mathbb {D}}^{1,2}\) with unit variance such that \(F = \delta (v)\) for some \({\mathfrak {H}}\)-valued random variable v belonging to the domain of \(\delta \), then, with \(Z \sim N (0,1)\),

$$\begin{aligned} d_\mathrm{TV}(F, Z) \le 2 \sqrt{\mathrm{Var} \left( \langle DF, v\rangle _{\mathfrak {H}}\right) }. \end{aligned}$$
(3.7)

3.2 On fractional Green kernel

We recall some useful properties of the kernel \(G_{\alpha }\) defined through (2.2). For details, we refer to [2, 5, 7].

  1. (1)

For every \(t>0\), \(G_{\alpha }(t, \cdot ) \) is the density of a d-dimensional isotropic \(\alpha \)-stable Lévy process at time t. In particular, we have

    $$\begin{aligned} \int _{{\mathbb {R}}^d} G_{\alpha }(t,x)dx=1. \end{aligned}$$
    (3.8)
  2. (2)

    For every t, the kernel \(G_{\alpha }(t,x)\) is real valued, positive, and symmetric in x.

  3. (3)

    The operator \(G_{\alpha } \) satisfies the semigroup property, i.e.

    $$\begin{aligned} G_{\alpha } (t+s, x)= \int _{{\mathbb {R}}^{d}} G_{\alpha } (t, z ) G_{\alpha } (s, x-z) dz \end{aligned}$$
    (3.9)

    for \(0<s<t\) and \(x\in {\mathbb {R}}^d\).

  4. (4)

    \(G_{\alpha }\) is infinitely differentiable with respect to x, with all the derivatives bounded and converging to zero as \(\vert x\vert \rightarrow \infty \). Moreover, we have the scaling property

    $$\begin{aligned} G_{\alpha }(t,x)= t^ {-\frac{d}{\alpha } } G_{\alpha }(1, t ^ {-\frac{1}{\alpha }}x). \end{aligned}$$
    (3.10)
  5. (5)

    There exist two constants \(0<K_{\alpha }' < K_{\alpha } \) such that

    $$\begin{aligned} K' _{\alpha } \frac{1}{\left( 1+ \vert x\vert \right) ^ {d+\alpha }}\le \left| G_{\alpha } (1,x)\right| \le K_{\alpha } \frac{1}{ \left( 1+ \vert x\vert \right) ^ {d+\alpha }} \end{aligned}$$
    (3.11)

    for all \(x\in {\mathbb {R}}^d\). Together with the scaling property, this further translates into

    $$\begin{aligned} K' _{\alpha } \frac{t^{-\frac{d}{\alpha }}}{\left( 1+ \vert t^{-\frac{1}{\alpha }}x\vert \right) ^ {d+\alpha }}\le \left| G_{\alpha } (t,x)\right| \le K_{\alpha } \frac{t^{-\frac{d}{\alpha }}}{ \left( 1+ \vert t^{-\frac{1}{\alpha }}x\vert \right) ^ {d+\alpha }}. \end{aligned}$$
    (3.12)
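The scaling property (3.10) and the two-sided bound (3.11) are easy to probe numerically. The following sketch (ours; the parameters are illustrative) evaluates \(G_\alpha \) in \(d=1\) by FFT as before, checks (3.10) at \(t=3\), and estimates the tail exponent \(-(d+\alpha )\) from (3.11).

```python
import numpy as np

# Sketch (ours): probe the scaling (3.10) and the tail bound (3.11) in d = 1
# for alpha = 1.2, evaluating G_alpha by FFT; grid parameters are arbitrary.
alpha, L, n = 1.2, 800.0, 2 ** 16
dx = L / n
x = (np.arange(n) - n // 2) * dx
xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)

def G(t):
    return np.fft.fftshift(np.fft.ifft(np.exp(-t * np.abs(xi) ** alpha)).real) / dx

# Scaling (3.10): G(t, x) = t^{-1/alpha} G(1, t^{-1/alpha} x) in d = 1.
t = 3.0
rhs = t ** (-1 / alpha) * np.interp(t ** (-1 / alpha) * x, x, G(1.0))
print("scaling defect:", np.abs(G(t) - rhs).max())   # ~ discretization error

# Tail (3.11): log G(1, x) should decay like -(d + alpha) log |x|.
g1 = G(1.0)
i, j = np.searchsorted(x, 20.0), np.searchsorted(x, 80.0)
slope = (np.log(g1[j]) - np.log(g1[i])) / (np.log(x[j]) - np.log(x[i]))
print("tail exponent:", slope, " expected:", -(1 + alpha))
```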

3.3 A basic inequality

The following proposition contains an inequality that plays a fundamental role in the proof of the estimates of the p-norm of the Malliavin derivative.

Proposition 3.2

Suppose that the covariance \(\gamma \) satisfies Assumption 2.2. Then, there exists a number \(2q \in \left( 1,\frac{2d}{2d-\alpha } \wedge \frac{d+\alpha }{d}\right) \) such that for any functions \(f,g \in L^{2q}({\mathbb {R}}^d)\) we have

$$\begin{aligned} \int _{{\mathbb {R}}^d} f(y) \left[ g *\gamma \right] (y)dy \le C \Vert f\Vert _{L^{2q}({\mathbb {R}}^d)}\Vert g\Vert _{L^{2q}({\mathbb {R}}^d)}. \end{aligned}$$
(3.13)

Remark 7

The requirement \(2q < \frac{2d}{2d-\alpha }\) ensures that

$$\begin{aligned} \kappa =\frac{2d}{\alpha }\left( 1-\frac{1}{2q}\right) <1, \end{aligned}$$
(3.14)

while the requirement \(2q < \frac{d+\alpha }{d}\) ensures that \(G^{\frac{1}{2q}}_{\alpha }(1,x)\) is integrable. Note also that \(\frac{d+\alpha }{d} \le \frac{2d}{2d-\alpha }\) only if \(d\le \alpha \). Since \(\alpha \le 2\), this can happen only in the one-dimensional case \(d=1\) or in the heat case \(\alpha =2\) and \(d=1,2\). In the latter case, however, \(G^{\frac{1}{2q}}_{\alpha }(1,x)\) is integrable regardless of the value of 2q, and consequently our results can be applied in that case as well under the condition \(2q \in \left( 1,\frac{2d}{2d-2}\right) \).
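The bookkeeping in Remark 7 is elementary but easy to get wrong; the following tiny helper (ours) computes the admissible interval for 2q and the exponent \(\kappa \) of (3.14) for given \(\alpha \), d and, in Case (i), the choice \(2q = \frac{2d}{2d-\beta }\) made in the proof below.

```python
# Tiny helper (ours) for the bookkeeping of Remark 7 and Proposition 3.2.
def two_q_interval(alpha, d):
    return 1.0, min(2 * d / (2 * d - alpha), (d + alpha) / d)

def kappa(two_q, alpha, d):
    return (2 * d / alpha) * (1 - 1 / two_q)    # the exponent in (3.14)

alpha, d, beta = 1.5, 2, 1.0                    # Case (i): beta < alpha ^ d
two_q = 2 * d / (2 * d - beta)                  # the choice in Case (i) below
lo, hi = two_q_interval(alpha, d)
print(lo < two_q < hi, kappa(two_q, alpha, d))  # True, kappa = 2/3 < 1
```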

Proof of Proposition 3.2

We decompose the proof into the three possible cases from Assumption 2.2:

Case (i):

Taking \(2q=\frac{2d}{2d-{\beta }}\) (recall \(\beta < \alpha \wedge d\)) and using Hölder’s inequality, we obtain

$$\begin{aligned} \int _{{\mathbb {R}}^{d}} f(x) [ g*\gamma ] (x) dx \le \Vert f\Vert _{L^{2q}({\mathbb {R}}^d)} \Vert g*\gamma \Vert _{L^{2q/(2q-1)}({\mathbb {R}}^d)}. \end{aligned}$$

Notice that \(g*\gamma =( I_{d-\beta } g) * \mu \). Therefore, it follows from (2.7) that

$$\begin{aligned} \Vert g*\gamma \Vert _{L^{2q/(2q-1)}({\mathbb {R}}^d)} \le \mu ({\mathbb {R}}^d) \Vert I_{d-\beta } g \Vert _{L^{2q/(2q-1)}({\mathbb {R}}^d)}. \end{aligned}$$

We then conclude the proof using the fact that \(2q=\frac{2d}{2d-{\beta }}\) and applying the following Hardy-Littlewood-Sobolev inequality (see e.g. [15] and references therein): for \(1<p<r<\infty \) satisfying \(\frac{1}{r} = \frac{1}{p} - \frac{d-\beta }{d}\), we have

$$\begin{aligned} \Vert I_{d-\beta } g\Vert _{L^{r}({\mathbb {R}}^d)} \le C\Vert g\Vert _{L^{p}({\mathbb {R}}^d)}. \end{aligned}$$
(3.15)
Case (ii):

Suppose \(\beta =d\). By Hölder's inequality and Young's inequality (2.7), we get

$$\begin{aligned} \int _{{\mathbb {R}}^d} f({y}) \left[ g *\gamma \right] (y)dy \le \Vert f\Vert _{L^{2}({\mathbb {R}}^d)}\Vert g *\gamma \Vert _{L^{2}({\mathbb {R}}^d)} \le C \Vert f\Vert _{L^{2}({\mathbb {R}}^d)}\Vert g \Vert _{L^{2}({\mathbb {R}}^d)}. \end{aligned}$$

Consequently, one can always choose \(q=1\) in (3.13). However, then \(2q < \frac{2d}{2d-\alpha }\wedge \frac{d+\alpha }{d}\) only if \(\alpha > d\). Taking into account the fact \(\alpha \in (0,2]\), this forces \(d=1\) and \(\alpha >1\). In conclusion, in the one-dimensional case and for \(\alpha >1\) we obtain the estimate (3.13) with \(q=1\), which completes the proof of Case (ii).

Case (iii):

Let \(\beta =d\ge \alpha \) and suppose that \(\gamma \) is absolutely continuous with a density \(\gamma \in L^r({\mathbb {R}}^d)\), where \(r>\frac{d}{\alpha }\). In this case we choose \(2q=\frac{2r}{2r-1}\). Clearly \(2q>1\). Moreover, the condition \(r>\frac{d}{\alpha }\) implies \(2q < \frac{2d}{2d-\alpha }\), and \(\frac{2d}{2d-\alpha } \le \frac{d+\alpha }{d}\) because \(d \ge \alpha \). Finally, Hölder's inequality and Young's inequality (2.6) give us

$$\begin{aligned} \int _{{\mathbb {R}}^d}{\int _{{\mathbb {R}}^d}} f(x) g(y) \gamma (x-y) dx dy \le \Vert f\Vert _{L^{2q}({\mathbb {R}}^d)} \Vert g\Vert _{L^{2q}({\mathbb {R}}^d)}\Vert \gamma \Vert _{L^{r}({\mathbb {R}}^d)}. \end{aligned}$$

\(\square \)

4 Proof of Theorem 2.1

For \(t\ge 0\), we denote

$$\begin{aligned} I(t)= \int _{{\mathbb {R}}^{d} } \int _{{\mathbb {R}}^{d} } G_{\alpha } (t,y )G_{\alpha } (t,y' )\gamma ( y-y ')dy'dy. \end{aligned}$$
(4.1)

Taking the Fourier transform and using (2.2), we see that I(t) can equally be given by

$$\begin{aligned} I(t)= \int _{{\mathbb {R}}^{d}} e ^{-2t\vert \xi \vert ^{\alpha }} {\widehat{\gamma }}(d\xi ). \end{aligned}$$
(4.2)

Suppose now that \(\gamma \) satisfies (2.4). For \(\beta >0\), we define a function \(\Upsilon (\beta )\) by

$$\begin{aligned} \Upsilon (\beta ) := \int _0^\infty e^{-\beta t}I(t)dt = \int _{{\mathbb {R}}^d} \frac{{\widehat{\gamma }}(d\xi )}{\beta +2 |\xi |^\alpha }. \end{aligned}$$

Clearly, \(\Upsilon \) is non-negative, decreasing in \(\beta \), and \(\lim _{\beta \rightarrow \infty } \Upsilon (\beta )=0\).

Before proving Theorem 2.1 we introduce the following technical lemma that can be viewed as a fractional version of [4, Lemma 2.5].

Lemma 4.1

Let I(t) be given by (4.1) and, for given \(\iota >0\), let \(h_n\) be defined recursively by \(h_0(t) = 1\), and for \(n\ge 1\)

$$\begin{aligned} h_{n}(t) = \iota \int _0^t h_{n-1}(s)I(t-s)ds. \end{aligned}$$

Then for any \(p\ge 1\) and any fixed \(T<\infty \), the series

$$\begin{aligned} H(\iota ,p,t) :=\sum _{n\ge 0} \left[ h_n(t)\right] ^{\frac{1}{p}} \end{aligned}$$
(4.3)

converges uniformly in \(t\in [0,T]\).

Proof

By the same argument as in the proof of [4, Lemma 2.5], we get, for any \(\beta >0\), that

$$\begin{aligned} \int _0^\infty e^{-\beta t}h_n(t) dt = \frac{1}{\beta }\left( \iota \int _0^\infty e^{-\beta t} I(t)dt\right) ^n = \frac{1}{\beta }\left[ \iota \Upsilon (\beta )\right] ^n. \end{aligned}$$

By choosing \(\beta \) large enough, we have \(\iota \Upsilon (\beta )\le 1/2\) and, as in [4], by choosing the smallest such \(\beta \) this gives us \(H(\iota ,1,t) \le \exp (Ct)\) for some constant C depending on \(\Upsilon \) and \(\iota \). Similarly, for the general case \(p>1\) we may apply Hölder inequality to get

$$\begin{aligned} \int _0^\infty e^{-\beta t}h^{1/p}_n(t) dt \le \beta ^{-\frac{1}{q}} \left( \int _0^\infty e^{-\beta t}h_n(t) dt\right) ^{\frac{1}{p}} = \frac{1}{\beta }\left[ \iota \Upsilon (\beta )\right] ^{\frac{n}{p}}, \end{aligned}$$

where \(\frac{1}{p} + \frac{1}{q}=1\). Hence, similar arguments show that \(H(\iota ,p,t)\le \exp (Ct)\) and, in particular, that the series in (4.3) converges. \(\square \)
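The convergence asserted in Lemma 4.1 can also be observed numerically. The sketch below (ours) iterates the recursion with \(\iota =1\) and the white-noise-type kernel \(I(t)=t^{-1/\alpha }\) (\(d=1\), \(\gamma =\delta _0\), constants dropped), using a crude Riemann sum that simply omits the integrable singularity at the left endpoint.

```python
import numpy as np

# Numerical sketch (ours) of the recursion in Lemma 4.1, iota = 1,
# I(t) = t^{-1/alpha}; discretization parameters are arbitrary.
alpha, T, m = 1.5, 2.0, 4000
t = np.linspace(0.0, T, m + 1)
dt = T / m
I = np.where(t > 0, t, np.inf) ** (-1.0 / alpha)
I[0] = 0.0                        # drop the integrable singularity at 0

h = np.ones(m + 1)                # h_0
H = h.copy()                      # partial sums of sum_n h_n^{1/p}, here p = 1
for n in range(1, 40):
    # h_n(t) = int_0^t h_{n-1}(s) I(t - s) ds, as a discrete convolution
    h = np.convolve(h, I)[: m + 1] * dt
    H += h
print(H[-1], h.max())             # the series converges; h_n -> 0 rapidly
```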

Equipped with Lemma 4.1, we are now able to prove Theorem 2.1.

Proof of Theorem 2.1

Define the standard Picard iterations by setting \(u_{0}(t,x)=1\) and, for \(n\ge 0\),

$$\begin{aligned} u_{n+1} (t,x)= u_{0}(t,x)+\int _{0} ^ {t}\int _{{\mathbb {R}}^{d}} G_{\alpha } (t-s,x -y ) \sigma (u_{n}(s, y))W (ds, dy), \qquad t\ge 0, \ x\in {\mathbb {R}}^{d}. \end{aligned}$$

By induction, we can easily show that for every \(n\ge 0\), \(u_{n}(t, x)\) is well-defined and, for every \(p\ge 2\) and \(\beta >0\), we have

$$\begin{aligned} \sup _{t\in [0, T]} \sup _{x\in {\mathbb {R}}^{d}} {\mathbb {E}}\left[ e^{-p\beta t}\left| u_{n}(t,x) \right| ^p\right] <\infty . \end{aligned}$$
(4.4)

This in turn shows that

$$\begin{aligned} \sup _{t\in [0, T]} \sup _{x\in {\mathbb {R}}^{d}} {\mathbb {E}}\left[ \left| u_{n}(t,x) \right| ^ {p} \right] <\infty . \end{aligned}$$
(4.5)

To see (4.4), we first observe that it is clearly true for \(n=0\). Suppose now that it holds for some n. We have

$$\begin{aligned} e^{-\beta t} u_{n+1} (t,x)=e^{-\beta t} u_{0}(t,x)+\int _{0} ^ {t}e^{-\beta t}\int _{{\mathbb {R}}^{d}} G_{\alpha } (t-s,x -y ) \sigma (u_{n}(s, y))W (ds, dy) \end{aligned}$$

and, for every \(p\ge 2\), by using (3.3) and (3.4),

$$\begin{aligned} {\mathbb {E}}\left[ e^{-p\beta t}\left| u_{n+1}(t,x)\right| ^p \right]\le & {} C\left( 1+ \Big \Vert \int _{0} ^{t} e^{-\beta t}\int _{{\mathbb {R}}^{d}} \int _{{\mathbb {R}}^{d}}G_{\alpha } (t-s,x -y )\right. \\&G_{\alpha } (t-s,x -y' )\\&\left. \times \sigma (u_{n}(s, y)) \sigma (u_{n}(s, y')) \gamma (y-y ')dy'dy\Big \Vert _{\frac{p}{2}}^{\frac{p}{2}}\right) \\\le & {} C \left[ 1+ \left( \int _{0} ^{t} e^{-\beta (t-s)}\int _{{\mathbb {R}}^{d}} \int _{{\mathbb {R}}^{d}}G_{\alpha } (t-s,x -y )\right. \right. \\&G_{\alpha } (t-s,x -y' )\\&\left. \left. \times e^{-\beta s}\left| \left| \sigma (u_{n}(s, y)) \sigma (u_{n}(s, y'))\right| \right| _{\frac{p}{2}}\gamma (y-y ')dy'dy\right) \right] ^{\frac{p}{2}}. \end{aligned}$$

By using the Lipschitz assumption on \(\sigma \) and the induction hypothesis we get

$$\begin{aligned} {\mathbb {E}}\left[ e^{-p\beta s}\vert \sigma (u_{n}(s,y)) \vert ^p\right] \le C\left( 1+ \sup _{s\in [0,T],y\in {\mathbb {R}}^d} {\mathbb {E}}\left[ e^{-p\beta s}\vert u_{n}(s, y)\vert ^p \right] \right) <\infty . \end{aligned}$$

Hence

$$\begin{aligned} \sup _{s\in [0,T]}\sup _{y,y'\in {\mathbb {R}}^d} e^{-\beta s}\left| \left| \sigma (u_{n}(s, y)) \sigma (u_{n}(s, y'))\right| \right| _{\frac{p}{2}} < \infty \end{aligned}$$

and we obtain

$$\begin{aligned} {\mathbb {E}}\left[ e^{-p\beta t}\left| u_{n+1}(t,x)\right| ^p\right]&\le C \left( 1+ \int _{0}^{t} e^{-\beta (t-s)}I (t-s) ds \right) ^{\frac{p}{2}} \\&\le C \left( 1+ \int _0^T e^{-\beta s}I(s)ds\right) ^{\frac{p}{2}} < \infty . \end{aligned}$$

Applying similar arguments together with Hölder’s inequality for

$$\begin{aligned} H_{n}(t):= \sup _{x \in {\mathbb {R}}^{d}} {\mathbb {E}}\left[ \left| u_{n+1} (t, x)- u_{n}(t,x) \right| ^ {p}\right] \end{aligned}$$

gives us

$$\begin{aligned} H_n(t)&\le C \left[ \int _{0} ^ {t} \int _{{\mathbb {R}}^{d} } \int _{{\mathbb {R}}^{d} } G_{\alpha } (t-s,x -y )G_{\alpha } (t-s,x -y' )\right. \\&\quad \left| \left| \sigma (u_{n}(s, y))- \sigma (u_{n-1}(s, y))\right| \right| _{p}\\&\quad \times \left. \left| \left| \sigma (u_{n}(s, y'))- \sigma (u_{n-1}(s, y'))\right| \right| _{p} \gamma (y-y')dy'dyds\right] ^ {\frac{p}{2}}\\&\le C \int _{0} ^ {t} \int _{{\mathbb {R}}^{d} } \int _{{\mathbb {R}}^{d} } G_{\alpha } (t-s,x -y )G_{\alpha } (t-s,x -y' ) H_{n-1}(s) \gamma (y-y')dy'dyds\\&\le C \int _{0} ^{t} I(t-s) H_{n-1}(s)ds. \end{aligned}$$

By standard arguments, it suffices to consider the case of an equality. In this case, it follows from Lemma 4.1 that \(\sum _{n\ge 1} H_{n} (t)^ {\frac{1}{p}}\) converges uniformly on [0, T]. Consequently, the sequence \(u_{n}\) converges in \(L^ {p}(\Omega ) \), uniformly on \([0, T] \times {\mathbb {R}}^{d} \), and its limit satisfies (2.3). The uniqueness follows in a similar way, and the stationarity of the solution with respect to the space variable is a consequence of the proof of Lemma 18 in [6].

\(\square \)

5 Proofs of Theorems 2.3 and 2.4

In this section we prove Theorems 2.3 and 2.4. The key ingredient for the proofs is to bound the Malliavin derivative of the solution to (1.1) by a quantity involving the Green kernel associated to the fractional operator (2.2). Once a suitable bound is established, it suffices to study the asymptotic variance and follow the ideas presented in [12, 13]. We divide this section into four subsections. In the first one we study the (bound for the) Malliavin derivative of the solution, and in the second we study the correct normalization rate. The last two subsections are devoted to the proofs of Theorem 2.3 and Theorem 2.4.

5.1 Bound for the Malliavin derivative

We begin by providing a linear equation for the Malliavin derivative of the solution. The claim follows from (2.3), and the proof is rather standard. For this reason we omit the details.

Proposition 5.1

Let u be the mild solution to (1.1). Then for every \(t\in (0, T]\), \(p\ge 2\) and \(x \in {\mathbb {R}}^d \), the random variable u(t,x) belongs to the Sobolev space \({\mathbb {D}}^{1,p}\) and its Malliavin derivative satisfies

$$\begin{aligned} D_{r, z }u(t, x)= & {} G_{\alpha } (t-r, x-z) \sigma (u(r, z))\nonumber \\&+ \int _{r} ^ {t}\int _{{\mathbb {R}}^d } G_{\alpha } (t-s, x-y) \Sigma (s, y) D_{r, z } u(s, y) W(ds, dy), \end{aligned}$$
(5.1)

where \(\Sigma (r, z) \) is an adapted and bounded (uniformly with respect to r and z) stochastic process that coincides with \(\sigma ' (u(r, z)) \) whenever \(\sigma \) is differentiable.

The following result provides a bound for the p-norm of the Malliavin derivative of the solution.

Proposition 5.2

Suppose that \(\gamma \) satisfies Assumption 2.2 and recall (see (3.14)) that \(\kappa = \frac{2d}{\alpha }\left( 1-\frac{1}{2q}\right) \), where q is from Proposition 3.2. Then for every \(0<s<t<T\), for every \(x, y \in {\mathbb {R}}^d \), and for every \(p\ge 2\) we have

$$\begin{aligned} \Vert D_{s,y}u(t,x)\Vert _{p} \le c (t-s) ^{-\frac{\kappa }{2}} G^{\frac{1}{2q}} _{\alpha } (t-s, x-y). \end{aligned}$$

Proposition 5.2 is based on the following lemma, whose proof is postponed to the appendix.

Lemma 5.3

Suppose that \(\gamma \) satisfies Assumption 2.2 and assume that \(g: [0, T]\times {\mathbb {R}}^d\rightarrow {{\mathbb {R}}}\) is a non-negative function satisfying, for every \(t\in [0, T] \) and \({x} \in {\mathbb {R}}^d \),

$$\begin{aligned} g(t,{x} ) ^ {2}\le & {} G_{\alpha } (t,{x}) ^ {2} + \int _{0} ^ {t} \int _{{\mathbb {R}}^d } \int _{{\mathbb {R}}^d } G_{\alpha }(t-s, {x}-{y} ) G_{\alpha }(t-s, {x}-{y}' ) \nonumber \\&\quad \times g(s, {y}) g(s, {y}' ) \gamma (y-y')dy'dyds . \end{aligned}$$
(5.2)

Then

$$\begin{aligned} g(t, {x}) \le c t ^{-\frac{\kappa }{2}}G _\alpha ^{\frac{1}{2q}} (t, {x}), \end{aligned}$$
(5.3)

where \(\kappa = \frac{2d}{\alpha }\left( 1-\frac{1}{2q}\right) \) and q is from Proposition 3.2.

Remark 8

By carefully examining the proof of Lemma 5.3, one observes that the statement remains valid as long as, for 2q determined through Proposition 3.2 and depending solely on the covariance \(\gamma \), we have

$$\begin{aligned} G_{\alpha }^{2q}(t,x) \le Ct^{-\kappa q}G_\alpha (t,x) \end{aligned}$$

for some constant C and parameter \(\kappa < 1\), and \(G_\alpha \) satisfies the semigroup property (3.9). This encodes the required connection between the density \(G_\alpha \) of the associated stable process and the covariance \(\gamma \). Indeed, the above requirement means that the improved integrability induced by the convolution with \(\gamma \) is sufficient to compensate for the low integrability (or the scaling in the time variable t) of the kernel \(G_\alpha \). As such, we could consider more general Green kernels G in place of \(G_\alpha \) in Lemma 5.3. For example, G can be taken to be the density of a more general Lévy process. In this case, we obtain Proposition 5.2 provided that G satisfies the above condition.

Proof of Proposition 5.2

In a standard way we can show that, for every \(t\in (0, T]\) and \(x \in {\mathbb {R}}^d \), the random variable u(t,x) belongs to the Sobolev space \({\mathbb {D}}^{1,p}\) for all \(p\ge 2\) and its Malliavin derivative satisfies (5.1). Moreover, using the Burkholder-Davis-Gundy inequality (3.4) we obtain that, for any \(p\ge 2\),

$$\begin{aligned} \Vert D_{r, z}u(t,x) \Vert _{p} ^ {2}\le & {} C_{p} G_{{\alpha }} (t-r, x-z) ^ {2}\\&+ C_{p} \int _{r}^ {t} \int _{{\mathbb {R}}^d} \int _{{\mathbb {R}}^d } G_{\alpha } (t-s, x-y) G_{\alpha } (t-s, x- y') \\&\times \Vert D_{r, z} u(s, y) \Vert _{p} \Vert D_{r, z} u(s, y') \Vert _{p}\gamma (y-y ')dy'dyds. \end{aligned}$$

To conclude the proof, it suffices to apply Lemma 5.3 with \(\theta =t-r, \eta = x-z \), and

$$\begin{aligned} g(\theta , \eta ) = \Vert D_{r, z} u(\theta +r, \eta + z)\Vert _{p}. \end{aligned}$$

\(\square \)

For later use we also record the following simple technical fact.

Lemma 5.4

Suppose \(2q \in \left( 1,\frac{2d}{2d-\alpha } \wedge \frac{d+\alpha }{d}\right) \). Then

$$\begin{aligned} \int _{{\mathbb {R}}^d} G^{\frac{1}{2q}}_{ \alpha } (r-s, \eta ) d\eta = C (r-s) ^{\frac{\kappa }{2}}, \end{aligned}$$

where \(\kappa \) is defined in (3.14).

Proof

By the scaling property (3.10) we get

$$\begin{aligned} \int _{{\mathbb {R}}^d} G^{\frac{1}{2q}}_{ \alpha } (r-s, \eta ) d\eta = (r-s) ^{\frac{\kappa }{2}}\int _{{\mathbb {R}}^d} G^{\frac{1}{2q}}_{ \alpha } (1, \eta ) d\eta \end{aligned}$$

where, by (3.11),

$$\begin{aligned} \int _{{\mathbb {R}}^d} G^{\frac{1}{2q}}_{ \alpha } (1, \eta ) d\eta \le C \int _{{\mathbb {R}}^d} \left( 1+|\eta |\right) ^{-\frac{d+\alpha }{2q}}d\eta < \infty \end{aligned}$$

since \(\frac{d+\alpha }{2q}>d\). \(\square \)
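The scaling in Lemma 5.4 can be checked numerically as well; the following sketch (ours; grid sizes and the value of 2q are arbitrary but admissible) estimates the exponent from two time points in \(d=1\) and compares it with \(\kappa /2\) from (3.14).

```python
import numpy as np

# Sketch (ours): estimate the scaling exponent of int G_alpha^{1/2q}(t, .)
# in d = 1; here 2q = 1.2 lies in the admissible interval (1, 2.5).
alpha, two_q, L, n = 1.5, 1.2, 400.0, 2 ** 14
dx = L / n
xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)

def J(t):
    g = np.fft.ifft(np.exp(-t * np.abs(xi) ** alpha)).real / dx
    return np.sum(np.abs(g) ** (1 / two_q)) * dx

kappa = (2.0 / alpha) * (1.0 - 1.0 / two_q)   # (3.14) with d = 1
t1, t2 = 0.5, 2.0
print("estimated exponent:", np.log(J(t2) / J(t1)) / np.log(t2 / t1))
print("kappa / 2         :", kappa / 2)
```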

5.2 Asymptotic behavior of the covariance

Let us use the following notation. For fixed \(t>0,\) we define

$$\begin{aligned} G_{R}(t):=\int _{B_R}\left[ u(t,x)-1\right] dx \text{ and } \varphi _{R} (t, y):=\int _{B_R} G_{\alpha } (t, x-y) dx . \end{aligned}$$
(5.4)

The constant \(k_\beta \), for \(\beta \le d\), is defined by

$$\begin{aligned} k_d = |B_1| \text{ and } k_{\beta }:=\int _{B_1^2} |x-x'|^{-\beta } dx dx',\quad \beta <d. \end{aligned}$$
(5.5)

Set

$$\begin{aligned} \Psi (s, z) = {\mathbb {E}}[\sigma (u(s, 0)) \sigma (u(s, z))] \end{aligned}$$
(5.6)

and

$$\begin{aligned} \theta _{\alpha } (s)={\mathbb {E}}[\sigma (u(s,y))]. \end{aligned}$$
(5.7)

When \(\beta =d\), we put

$$\begin{aligned} \nu _{\alpha }(s) = \left( \int _{{\mathbb {R}}^d} \Psi (s,z)d\mu (z)\right) ^{\frac{1}{2}} \end{aligned}$$
(5.8)

and the following lemma justifies that \(\nu _\alpha \) is well-defined. The proof is postponed to the end of this subsection.

Lemma 5.5

Suppose that \(\gamma \) satisfies Assumption 2.2 with \(\beta =d\) and let \(\Psi \) be given by (5.6). Then for every \(s\ge 0\) we have

$$\begin{aligned} \int _{{\mathbb {R}}^d} \Psi (s,z)d\mu (z) \ge 0. \end{aligned}$$

In particular, \(\nu _\alpha \) given by (5.8) is well-defined. Moreover, for every \(s\in [0,T]\) we have

$$\begin{aligned} \nu ^2_\alpha (s) \ge \theta _\alpha ^2(s). \end{aligned}$$

The following theorem provides us the correct renormalization as well as the limiting covariance.

Theorem 5.6

Suppose that \(\gamma \) satisfies Assumption 2.2. Then

$$\begin{aligned} \lim _{R\rightarrow \infty }R^{\beta -2d}{\mathbb {E}}(G_R(t) G_{R}(r)) = k_\beta \int _{0}^{t\wedge r} \left[ \mu ({\mathbb {R}}^d)\theta ^2_{\alpha }(s)\mathbf {1}_{\beta <d} + \nu ^2_{\alpha }(s)\mathbf {1}_{\beta =d} \right] ds. \end{aligned}$$

Before we proceed to the proof of Theorem 5.6, we present a couple of technical lemmas.

Lemma 5.7

Suppose that \(\gamma \) satisfies Assumption 2.2. Then for any bounded function \(s\mapsto \theta (s)\) we have, as \(R\rightarrow \infty \),

$$\begin{aligned} R^{\beta -2d}\int _0^t \theta (s) \int _{{\mathbb {R}}^{2d}} \varphi _R(t-s,y)\varphi _R(t-s,y')\gamma (y-y')dy'dyds \rightarrow k_{\beta }\,\mu ({\mathbb {R}}^d)\int _0^t \theta (s)ds, \end{aligned}$$

where \(k_\beta \) is defined in (5.5).

Proof

Recall that, writing formally as in (3.2), we have

$$\begin{aligned}&\int _{{\mathbb {R}}^{2d}} \varphi _R(t-s,y)\varphi _R(t-s,y')\gamma (y-y')dy'dy \\&\quad = \int _{{\mathbb {R}}^{d}} \varphi _R(t-s,y)\left[ \varphi _R(t-s,\bullet ) *K_{d-\beta } *\mu \right] (y)dy. \end{aligned}$$

Since clearly \(\varphi _R(t-s,\bullet ) \in L^1({\mathbb {R}}^d) \cap L^{\infty }({\mathbb {R}}^d)\), it follows from Young's inequality (2.7) and the Hardy-Littlewood-Sobolev inequality (3.15) that \(\varphi _R(t-s,\bullet ) *K_{d-\beta } *\mu \in L^2({\mathbb {R}}^d)\). Hence we obtain, by taking a Fourier transform and using Plancherel's theorem, that

$$\begin{aligned}&\int _{{\mathbb {R}}^{d}} \varphi _R(t-s,y)\left[ \varphi _R(t-s,\bullet ) *I_{d-\beta } *\mu \right] (y)dy \\&\qquad = \frac{c_{d,\beta } }{(2\pi )^d} \int _{{\mathbb {R}}^d} \left| [{\widehat{\varphi }}_R(t-s,\bullet )](\xi )\right| ^2 |\xi |^{\beta -d}{\widehat{\mu }}(\xi )d\xi , \end{aligned}$$

where \(c_{d,\beta } =1\) for \(\beta =d\). By recalling that

$$\begin{aligned} \left| \int _{B_R} e^{-i\langle x,\xi \rangle }dx\right| ^2 = (2\pi R)^d|\xi |^{-d}J_{\frac{d}{2}}^2(R|\xi |), \end{aligned}$$

where \(J_{\frac{d}{2}}\) denotes the Bessel function of the first kind of order d/2, we obtain

$$\begin{aligned} \left| [{\widehat{\varphi }}_R(t-s,\bullet )](\xi )\right| ^2 = (2\pi R)^d|\xi |^{-d}J_{\frac{d}{2}}^2(R|\xi |) e^{-2(t-s)|\xi |^{\alpha }} \end{aligned}$$

leading to

$$\begin{aligned}&\frac{c_{d,\beta }}{(2\pi )^d}\int _{{\mathbb {R}}^d} \left| [{\widehat{\varphi }}_R(t-s,\bullet )](\xi )\right| ^2 |\xi |^{\beta -d}{\widehat{\mu }}(\xi )d\xi \\&\quad = c_{d,\beta } \int _{{\mathbb {R}}^d} R^d|\xi |^{-d}J_{\frac{d}{2}}^2(R|\xi |) e^{-2(t-s)|\xi |^{\alpha }} |\xi |^{\beta -d}{\widehat{\mu }}(\xi )d\xi \\&\quad = c_{d,\beta } R^{2d-\beta }\int _{{\mathbb {R}}^d} |\xi |^{-d}J_{\frac{d}{2}}^2(|\xi |) e^{-2(t-s)R^{-\alpha }|\xi |^{\alpha }} |\xi |^{\beta -d}{\widehat{\mu }}\left( \frac{\xi }{R}\right) d\xi . \end{aligned}$$

Since \({\widehat{\mu }} \in L^{\infty }({\mathbb {R}}^d)\), we have \(\sup _{R>0}e^{-2(t-s)R^{-\alpha }|\xi |^{\alpha }}{\widehat{\mu }}\left( \frac{\xi }{R}\right) < \infty \). Moreover, since \(J_{\frac{d}{2}}^2(|\xi |) = O(|\xi |^{{-1}})\) as \(|\xi |\rightarrow \infty \) and \(J_{\frac{d}{2}}^2(|\xi |) \sim c_d|\xi |^d\) as \(|\xi |\rightarrow 0\) (here we use the standard Landau notation and write \(f\sim g\) if \(\frac{f}{g} \rightarrow 1\)), we have \( \int _{{\mathbb {R}}^d} |\xi |^{\beta -2d}J_{\frac{d}{2}}^2(|\xi |)d\xi < \infty . \) This, together with the boundedness of \(\theta (s)\), allows us to use the dominated convergence theorem and therefore, as \(R\rightarrow \infty \),

$$\begin{aligned}&R^{\beta -2d}\int _0^t \theta (s) \int _{{\mathbb {R}}^{2d}} \varphi _R(t-s,y)\varphi _R(t-s,y')\gamma (y-y')dy'dyds \\&\quad \rightarrow c_{d,\beta } \int _0^t \theta (s) ds \int _{{\mathbb {R}}^d} |\xi |^{\beta -2d}J_{\frac{d}{2}}^2(|\xi |) {\widehat{\mu }}(0)d\xi . \end{aligned}$$

The result now follows from \({\widehat{\mu }}(0) = \mu ({\mathbb {R}}^d)\) together with the fact that

$$\begin{aligned} c_{d,\beta } \int _{{\mathbb {R}}^d} |\xi |^{\beta -2d}J_{\frac{d}{2}}^2(|\xi |) d\xi = \int _{B_1^2} |x_1-x_2|^{-\beta } dx_1dx_2 \end{aligned}$$
(5.9)

for \(\beta <d\) and, for \(\beta =d\), we have

$$\begin{aligned} \int _{{\mathbb {R}}^d} |\xi |^{-d}J_{\frac{d}{2}}^2(|\xi |) d\xi = |B_1|. \end{aligned}$$
(5.10)

Indeed, the validity of (5.10) can be seen from

$$\begin{aligned} \int _{{\mathbb {R}}^d}{} \mathbf{1} _{B_1}(x) dx = \int _{{\mathbb {R}}^d}\mathbf {1}^2_{B_1}(x) dx = \frac{1}{(2\pi )^d} \int _{{\mathbb {R}}^d} \left| \widehat{\mathbf {1}_{B_1}}(\xi )\right| ^2d\xi = \int _{{\mathbb {R}}^d} |\xi |^{-d}J_{\frac{d}{2}}^2(|\xi |) d\xi \end{aligned}$$

while the validity of (5.9) can be seen from

$$\begin{aligned} \int _{B_1^2}|x_1-x_2|^{-\beta } dx_1dx_2= & {} \int _{{\mathbb {R}}^{d}} \mathbf {1}_{B_1}(x_1) \left[ I_{d-\beta }\mathbf {1}_{B_1}\right] (x_1) dx_1 \\= & {} \frac{c_{d,\beta }}{ (2\pi )^{d}} \int _{{\mathbb {R}}^d} \left| \widehat{\mathbf {1}_{B_1}}(\xi )\right| ^2|\xi |^{\beta -d}d\xi \\= & {} c_{d,\beta }\int _{{\mathbb {R}}^d} |\xi |^{\beta -2d}J_{\frac{d}{2}}^2(|\xi |) d\xi . \end{aligned}$$

This completes the proof. \(\square \)
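The identities (5.9) and (5.10) lend themselves to a direct numerical check in \(d=1\), where \(J_{1/2}(x)=\sqrt{2/(\pi x)}\sin x\). In the sketch below (ours), the Riesz constant \(c_{1,\beta }=2^{1-\beta }\sqrt{\pi }\,\Gamma ((1-\beta )/2)/\Gamma (\beta /2)\) is the classical normalization, which the text leaves implicit, and the exact value \(16\sqrt{2}/3\) of \(\int _{B_1^2}|x-x'|^{-1/2}dxdx'\) is an elementary computation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma

# Sanity checks (ours) of (5.9)-(5.10) in d = 1.

# (5.10): int_R |xi|^{-1} J_{1/2}^2(|xi|) d xi = |B_1| = 2.
val = quad(lambda r: 2.0 * jv(0.5, r) ** 2 / r, 0, np.inf, limit=500)[0]
print(val)   # ~ 2.0

# (5.9) with beta = 1/2, assuming the classical Riesz normalization c below:
beta = 0.5
c = 2 ** (1 - beta) * np.sqrt(np.pi) * gamma((1 - beta) / 2) / gamma(beta / 2)
lhs = c * quad(lambda r: 2.0 * jv(0.5, r) ** 2 * r ** (beta - 2),
               0, np.inf, limit=500)[0]
rhs = 16.0 * np.sqrt(2.0) / 3.0   # exact value of int_{B_1^2}|x-x'|^{-1/2}
print(lhs, rhs)                   # the two should agree
```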

Lemma 5.8

Suppose that \(\gamma \) satisfies Assumption 2.2 and \(\beta <d\). Then

$$\begin{aligned} \lim _{\vert z\vert \rightarrow \infty }\sup _{0\le s\le t} \left| \Psi (s, z) -\theta ^2_{\alpha } (s) \right| =0. \end{aligned}$$

Proof

As in the proof of Theorem 3.1 in [13], we can write, via the Clark-Ocone formula,

$$\begin{aligned} \Psi (s, y- y' ) - \theta ^2_{\alpha } (s)= T(s, y, y' ), \end{aligned}$$

where

$$\begin{aligned} \vert T(s, y, y' )\vert \le C \int _{0} ^ {s} \int _{{\mathbb {R}}^{2d} } \Vert D_{r, z} u(s, y) \Vert _{2} \Vert D_{r, z'} u(s, y') \Vert _{2}\gamma (z-z')dz'dzdr. \end{aligned}$$

Hence, by applying Proposition 5.2, we obtain the estimate

$$\begin{aligned} \vert T(s, y, y' )\vert\le & {} C \int _{0} ^ {s}(s-r) ^{-\kappa } \int _{{\mathbb {R}}^{2d} } G_\alpha ^{\frac{1}{2q}} (s-r, y- z) G_\alpha ^{\frac{1}{2q}} (s-r, y'- z')\\&\times \gamma (z-z')dz'dzdr = : T_1(s,y,y'). \end{aligned}$$

We prove the claim by an argument based on uniform integrability. We know that \(\gamma = K_{d-\beta } *\mu \). Therefore,

$$\begin{aligned} T_1(s, y, y' )= & {} C \int _{0} ^ {s}(s-r) ^{-\kappa } \int _{ {\mathbb {R}}^{3d} } G_\alpha ^{\frac{1}{2q}} (s-r, y- z) G_\alpha ^{\frac{1}{2q}} ( s-r, y'- z')\\&\times \vert z- z'-w\vert ^{- \beta }dz'dz d\mu (w)dr, \end{aligned}$$

where \(2q= \frac{2d}{2d-\beta }\). Making the change of variables \(u=s-r\), \(\xi =y-z\) and \(\xi '=z'-y'\) (using the symmetry of \(G_\alpha \) in the space variable), we can write

$$\begin{aligned} T_1(s, y, y' )= & {} C \int _{0} ^ {s} u ^{-\kappa } \int _{ {\mathbb {R}}^{3d} } G_\alpha ^{\frac{1}{2q}} (u, \xi ) G_\alpha ^{\frac{1}{2q}} ( u, \xi ')\\&\times \vert y-y'-\xi -\xi ' -w\vert ^{- \beta }d\xi 'd\xi d\mu (w)du. \end{aligned}$$

For any fixed \(\xi , \xi ' ,w \in {\mathbb {R}}^d\), clearly, \(\vert y-y'-\xi -\xi ' -w\vert ^{- \beta }\) tends to zero as \(|y-y'|\) tends to infinity. Taking into account that

$$\begin{aligned} \int _{0} ^ {s} u ^{-\kappa } \int _{ {\mathbb {R}}^{3d} } G_\alpha ^{\frac{1}{2q}} (u, \xi ) G_\alpha ^{\frac{1}{2q}} ( u, \xi ') d\xi 'd\xi d\mu (w)du <\infty , \end{aligned}$$

to show that \(\lim _{|y-y'| \rightarrow \infty } T_1(s, y, y' ) =0\), it suffices to check that

$$\begin{aligned} I:= \int _{0} ^ {s} u ^{-\kappa } \int _{{\mathbb {R}}^{3d} } G_\alpha ^{\frac{1}{2q}} (u, \xi ) G_\alpha ^{\frac{1}{2q}} ( u, \xi ') \vert y-y'-\xi -\xi ' -w\vert ^{- \beta '}d\xi 'd\xi d\mu (w)du<\infty \end{aligned}$$

for some \(\beta '>\beta \). Making a change of variables, we can write

$$\begin{aligned} I= \int _{0} ^ {s} u ^{-\kappa } \int _{{\mathbb {R}}^{3d} } G_\alpha ^{\frac{1}{2q}} (u, y-z) G_\alpha ^{\frac{1}{2q}} ( u, y'-z') \vert z-z' -w\vert ^{- \beta '}dz'dz d\mu (w)du. \end{aligned}$$

Applying Hölder's inequality and the Hardy-Littlewood-Sobolev inequality (3.15) yields

$$\begin{aligned} I \le C \int _{0} ^ {s} u ^{-\kappa } du \left( \int _{{\mathbb {R}}^d} G_\alpha ^{\frac{2d-\beta }{2d-\beta '}} (u,x) dx \right) ^{\frac{2d-\beta '}{d}}, \end{aligned}$$

which is finite since \(\beta '\) is close to \(\beta \). This concludes the proof. \(\square \)

Proof of Theorem 5.6

For notational simplicity, we only consider the case \(r=t\) while the case of general \(t,r\in [0,T]\) follows in a similar way. Using (2.3) and (5.4), we can write

$$\begin{aligned} G_R(t) = \int _0^t \int _{{\mathbb {R}}^d} \varphi _R(t-s,y)\sigma (u(s,y))W(ds,dy). \end{aligned}$$

Hence, by (3.3), we get

$$\begin{aligned} {\mathbb {E}}[G^2_{R}(t) ]= & {} \int _{0} ^ {t} \int _{{\mathbb {R}}^{2d}} \varphi _{R}(t-s, y) \varphi _{R}(t-s, y' ) \Psi (s, y' -y)\gamma (y-y')dy'dyds. \end{aligned}$$

Let us begin with the case \(\beta <d\). In view of Lemma 5.7 together with the boundedness of \(\theta ^2_{\alpha } (s)\), it suffices to show that

$$\begin{aligned} T_{R}:= & {} R^{\beta - 2d} \int _0^t \int _{{\mathbb {R}}^{2d}} \left[ \Psi (s, y-y ')-\theta ^2_{\alpha } (s) \right] \varphi _{R} (t-s, y) \varphi _{R} (t-s, y') \nonumber \\&\quad \times \gamma (y-y')dy'dyds \rightarrow 0. \end{aligned}$$
(5.11)

Now by Lemma 5.8 we know that for every \(\varepsilon >0\) there exists \(K>0\) such that, for every \(s\in [0, t]\) and every \(y, y'\) with \( \vert y- y' \vert \ge K\),

$$\begin{aligned} \left| \Psi (s, y-y') -\theta ^2_{\alpha } (s) \right| \le \varepsilon . \end{aligned}$$
(5.12)

By using \(\gamma = K_{d-\beta } *\mu \), we split \( T_{R}= T_{R,1}+ T_{R,2}, \) where

$$\begin{aligned} T_{R,1}= & {} R^{\beta -2d} \int _{0}^{t} \int _{{\mathbb {R}}^{3d} } \varphi _{R} (t-s, y) \varphi _{R} (t-s, y') \left[ \Psi (s, y-y ')- \theta ^2 _{\alpha }(s) \right] \\&\qquad \times \vert y-y'-w\vert ^{-\beta } 1_{\vert y-y '\vert \le K}d\mu (w) dy'dyds \end{aligned}$$

and

$$\begin{aligned} T_{R, 2}= & {} R^{\beta -2d} \int _{0}^{t} \int _{{\mathbb {R}}^{3d}} \varphi _{R} (t-s, y) \varphi _{R} (t-s, y') \left[ \Psi (s, y-y ')- \theta ^2_{\alpha } (s) \right] \\&\quad \times \vert y-y'-w\vert ^{-\beta }1_{\vert y-y '\vert \ge K}d\mu (w)dy'dyds. \end{aligned}$$

On the region \(|y'-y|\le K, 0\le s \le T\) the quantity \(\Psi (s, y-y ')- \theta ^2_{\alpha } (s)\) is uniformly bounded. Using also the semigroup property and (3.8) allows us to estimate

$$\begin{aligned} T_{R,1}\le & {} CR ^{\beta -2d} \int _{0}^{t} \int _{{\mathbb {R}}^{3d}} \int _{B_R^2} G_{ \alpha } (t-s, x- y) G_{ \alpha } (t-s, x'- y') \\&\qquad \times \vert y-y '-w\vert ^{-\beta }1_{\vert y-y '\vert \le K} dx'dxdy'dyd\mu (w)ds\\= & {} CR ^{\beta -2d} \int _{0}^{t} \int _{{\mathbb {R}}^{2d}} \int _{B_R^2} G_{ \alpha } (2(t-s), x-x'- y') \vert y'-w\vert ^{-\beta }\\&\qquad \times 1_{\vert y '\vert \le K}dx'dxdy'd\mu (w)ds\\\le & {} C R ^{\beta -d}\int _{{\mathbb {R}}^{2d}} \vert y-w\vert ^{-\beta }1_{\vert y \vert \le K}dyd\mu (w) \rightarrow 0 \end{aligned}$$

as \(R\rightarrow \infty \), since clearly here we have

$$\begin{aligned} \int _{{\mathbb {R}}^{2d}} \vert y-w\vert ^{-\beta }1_{\vert y \vert \le K}dyd\mu (w) < \infty . \end{aligned}$$

For the term \(T_{R,2}\), we apply (5.12) to get

$$\begin{aligned} T_{R, 2}\le & {} \varepsilon C_\alpha R ^{\beta -2d} \int _{0}^{t} \int _{{\mathbb {R}}^{3d}} \int _{B_R^2} G_{ \alpha } (t-s, x- y) G_{ \alpha } (t-s, x'- y') \\&\times \vert y- y '-w\vert ^{-\beta }dx'dxdy'dyd\mu (w)ds. \end{aligned}$$

The change of variables \(x-y= \theta \), \(x'-y' =\theta '\), \(x=R\xi \) and \(x'= R\xi '\) yields

$$\begin{aligned} T_{R, 2}&\le \varepsilon C_\alpha \int _{0}^{t} \int _{{\mathbb {R}}^{3d}} \int _{B_1^2} G_{ \alpha } (t-s, \theta ) G_{ \alpha } (t-s, \theta ') \\&\qquad \times \vert \xi - \xi ' - R^{-1}\theta +R^{-1}\theta ' -w\vert ^{-\beta }d\xi 'd \xi d\theta d\theta ' d\mu (w)ds, \end{aligned}$$

which is bounded by \(C\varepsilon \) because \(\sup _{z\in {\mathbb {R}}^d} \int _{B_1} |y-z| ^{-\beta } dy <\infty \). Since \(\varepsilon >0\) is arbitrary, the desired limit (5.11) follows. This verifies the claim for the case \(\beta < d\).

Let next \(\beta =d\). Since for a fixed \(s>0\), the function \(y \mapsto \Psi (s,y)\) is a bounded function and now \(\gamma = \mu \) is a finite measure, we may regard \({\tilde{\gamma }}_s(dy) = \Psi (s,y)\gamma (dy)\) as a signed measure. Considering positive and negative parts separately, we may use exactly the same arguments as in the proof of Lemma 5.7 and get

$$\begin{aligned} R^{-d}{\mathbb {E}}[G^2_{R}(t) ]= & {} R^{-d}\int _{0} ^ {t} \int _{{\mathbb {R}}^{2d}} \varphi _{R}(t-s, y) \varphi _{R}(t-s, y' ) \Psi (s, y' -y)\\&\gamma (y-y')dy'dyds \\\rightarrow & {} \int _0^t \widehat{{\tilde{\gamma }}_s}(0)\int _{{\mathbb {R}}^d} |\xi |^{-d}J_{\frac{d}{2}}^2(|\xi |) d\xi ds, \end{aligned}$$

where now

$$\begin{aligned} \widehat{{\tilde{\gamma }}_s}(0) = \int _{{\mathbb {R}}^d} \Psi (s,z)d\gamma (z) = \nu _\alpha ^2(s). \end{aligned}$$

This verifies the claim for \(\beta =d\) as well, and hence the proof is completed. \(\square \)

We end this subsection by proving Lemma 5.5.

Proof of Lemma 5.5

Denote

$$\begin{aligned} T(s,y) := \Psi (s,y) - \theta ^2_\alpha (s). \end{aligned}$$

Since \(T(s,y)\) is also a bounded function, we may follow the proofs of Theorem 5.6 and Lemma 5.7 to obtain

$$\begin{aligned}&R^{-d}\int _{0} ^ {t} \int _{{\mathbb {R}}^{2d}} \varphi _{R}(t-s, y) \varphi _{R}(t-s, y' ) \Psi (s, y' -y)\gamma (y-y')dy'dyds \\&\quad = R^{-d}\int _{0} ^ {t} \int _{{\mathbb {R}}^{2d}} \varphi _{R}(t-s, y) \varphi _{R}(t-s, y' ) T(s, y' -y)\gamma (y-y')dy'dyds \\&\qquad + R^{-d}\int _{0} ^ {t} \theta ^2_\alpha (s)\int _{{\mathbb {R}}^{2d}} \varphi _{R}(t-s, y) \varphi _{R}(t-s, y' ) \gamma (y-y')dy'dyds, \end{aligned}$$

where now, as \(R\rightarrow \infty \),

$$\begin{aligned}&R^{-d}\int _{0} ^ {t} \theta ^2_\alpha (s)\int _{{\mathbb {R}}^{2d}} \varphi _{R}(t-s, y) \varphi _{R}(t-s, y' ) \gamma (y-y')dy'dyds\\&\quad \rightarrow \mu \left( {\mathbb {R}}^d\right) |B_1|\int _0^t \theta ^2_\alpha (s) ds \end{aligned}$$

and

$$\begin{aligned}&R^{-d}\int _{0} ^ {t} \int _{{\mathbb {R}}^{2d}} \varphi _{R}(t-s, y) \varphi _{R}(t-s, y' ) T(s, y' -y)\gamma (y-y')dy'dyds \\&\quad \rightarrow |B_1| \int _0^t \widehat{T(s,\bullet )\gamma (\bullet )}(0) ds. \end{aligned}$$

By the very definition, we have

$$\begin{aligned} T(s,y) := \Psi (s,y) - \theta ^2_\alpha (s) = \text {Cov}\left[ \sigma (u(s,y)), \sigma (u(s,0))\right] \end{aligned}$$

and since \(T(s,y)\) and \(\gamma (y)\) are both covariance functions, they are positive semidefinite. Consequently, by the Schur product theorem, the product \(T(s,y)\gamma (y)\) is again positive semidefinite, and hence a covariance function. It follows that

$$\begin{aligned} {\widehat{T(s,\bullet )\gamma (\bullet )}}(\xi ) \ge 0 \end{aligned}$$

for all \(\xi \in {\mathbb {R}}^d\) and, in particular, for \(\xi = 0\). Now

$$\begin{aligned} \int _0^t \int _{{\mathbb {R}}^d} \Psi (s,z)d\mu (z)ds= & {} \int _0^t \nu _\alpha ^2(s)ds \\= & {} \int _0^t \widehat{T(s,\bullet )\gamma (\bullet )}(0)ds + \int _0^t \theta _\alpha ^2(s)ds. \end{aligned}$$

The claim follows from this together with the observations \(\Psi (0,z)= \theta _\alpha ^2(0)\) for all \(z\in {\mathbb {R}}^d\) and \(\widehat{T(s,\bullet )\gamma (\bullet )}(0) \ge 0\) for all \(s\ge 0\). \(\square \)
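For the reader's convenience we record, somewhat informally, the positivity fact used above. If \(f\) and \(g\) are bounded continuous positive semidefinite functions with \(g\) integrable, then Bochner's theorem guarantees that \({\widehat{f}}\) and \({\widehat{g}}\) are nonnegative, so that

$$\begin{aligned} \widehat{fg}(\xi ) = c_d \left( {\widehat{f}} *{\widehat{g}}\right) (\xi ) \ge 0 \qquad \text {for all } \xi \in {\mathbb {R}}^d, \end{aligned}$$

where \(c_d>0\) depends only on the normalization of the Fourier transform; this is the Fourier-analytic counterpart of the Schur product theorem invoked above.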

5.3 Proof of Theorem 2.3

We start with the following result that we will utilise in the case \(\beta <d\).

Lemma 5.9

Suppose that \(0<\beta<\alpha < 2\wedge d\). For every \( t>0\) we have

$$\begin{aligned} \int _{{\mathbb {R}}^d} G_{{\alpha } } (t, x-y) \vert y\vert ^{-\beta } dy \le C_{\beta ,\alpha } |x| ^{-\beta }. \end{aligned}$$
(5.13)

Proof

Using the estimate (3.12), we have

$$\begin{aligned}&\int _{{\mathbb {R}}^d} G_{{\alpha } } (t, x-y) \vert y\vert ^{-\beta } dy \\&\quad \le C \int _{{\mathbb {R}}^d} \frac{ t^{- \frac{d}{\alpha }} }{ 1+ | (x-y) t^{-\frac{1}{\alpha }} |^{\alpha +d} } | y| ^{-\beta } dy \\&\quad = C \int _{|y|< \frac{ |x|}{2} } \frac{ t^{- \frac{d}{\alpha }} }{ 1+ | (x-y) t^{-\frac{1}{\alpha }} |^{\alpha +d} } | y| ^{-\beta } dy + C \int _{|y| \ge \frac{ |x|}{2}} \frac{ t^{- \frac{d}{\alpha }} }{ 1+ | (x-y) t^{-\frac{1}{\alpha }} |^{\alpha +d} } | y| ^{-\beta } dy \\&\quad \le C \frac{ t^{- \frac{d}{\alpha }} }{ 1+ | x t^{-\frac{1}{\alpha }} |^{\alpha +d} } \int _{|y| < \frac{ |x|}{2} } | y| ^{-\beta } dy + C | x| ^{-\beta } \int _{|y| \ge \frac{ |x|}{2}} \frac{ t^{- \frac{d}{\alpha }} }{ 1+ | (x-y) t^{-\frac{1}{\alpha }} |^{\alpha +d} } dy \\&\quad \le C \frac{ t^{- \frac{d}{\alpha }} }{ 1+ | x t^{-\frac{1}{\alpha }} |^{\alpha +d} } | x| ^{-\beta +d} + C | x| ^{-\beta } . \end{aligned}$$

The estimate (5.13) follows from this, because one can show that

$$\begin{aligned} \sup _{x\in {\mathbb {R}}^d} \sup _{t>0} \frac{ t^{- \frac{d}{\alpha }} |x|^d }{ 1+ | x t^{-\frac{1}{\alpha }} |^{\alpha +d} } <\infty . \end{aligned}$$
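Indeed, the substitution \(u = \vert x\vert t^{-\frac{1}{\alpha }}\) reduces the claim to a one-variable bound:

$$\begin{aligned} \frac{ t^{- \frac{d}{\alpha }} |x|^d }{ 1+ | x t^{-\frac{1}{\alpha }} |^{\alpha +d} } = \frac{u^{d}}{1+u^{\alpha +d}} \le 1, \end{aligned}$$

since \(u^d\le 1\) for \(u\le 1\) and \(u^d \le u^{\alpha +d}\) for \(u\ge 1\).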

\(\square \)

Proof of Theorem 2.3

Let \(\varphi _R\) be given by (5.4). By the same arguments as in the proof of [13, Theorem 1.1] (see pp. 7178-7180), using Proposition 3.1, Theorem 5.6 and Proposition 5.1, we get \(d_{TV}(F_R,Z)\le 2(A_1+A_2)\), where

$$\begin{aligned} A_1\le & {} CR^{\beta -2d}\int _0^t\left( \int _{0}^s (s-r) ^ {-\kappa }\int _{{\mathbb {R}}^{6d}} \varphi _{R} (t-s, y)\varphi _{R} (t-s, y')\varphi _{R} (t-s, {\tilde{y}}) \right. \nonumber \\&\times \varphi _{R} (t-s, {\tilde{y}}') G^{\frac{1}{2q}}_{ \alpha } (s-r, y- z) G^{\frac{1}{2q}}_{ \alpha } (s-r,{\tilde{y}}- z') \gamma (y-y')\nonumber \\&\left. \times \gamma ({\tilde{y}}-{{\tilde{y}}}')\gamma ( z- z')d yd y'd{{\tilde{y}}}d{\tilde{y}}'d zd z'dr \right) ^{1/2}ds \end{aligned}$$
(5.14)

and

$$\begin{aligned} A_{2}\le & {} C R^{\beta -2d}\int _{0} ^{t} \bigg ( \int _{s}^{t} (r-s) ^{-\kappa }\int _{ {\mathbb {R}}^{6d}} \varphi _{R} (t-r,z) \varphi _{R} (t-r, {\tilde{z}}) \varphi _{R} (t-s,y') \\&\times \varphi _{R} (t-s, {\tilde{y}} ') G^{\frac{1}{2q}}_{ \alpha } (r-s, z- y) G^{\frac{1}{2q}}_{ \alpha } (r-s,{\tilde{z}}- {\tilde{y}} ) \\&\times \gamma ( y- y')\gamma ({\tilde{y}}-{\tilde{y}}')\gamma ( z-{\tilde{z}})\,dy\,dy'\,d{\tilde{y}}\,d{\tilde{y}}'\,dz\,d{\tilde{z}}\,dr\bigg )^{1/2}ds. \end{aligned}$$

We begin with the case \(\beta =d\), which is simpler. For the term \(A_1\) in this case, we use the trivial bound \(\varphi _{R} (t-s, y')\varphi _{R} (t-s, {\tilde{y}})\varphi _{R} (t-s, {\tilde{y}}') \le 1\), integrate in the variables \(y'\) and \({\tilde{y}}'\), perform the change of variables \(y\mapsto y-z\) and \({\tilde{y}} \mapsto {\tilde{y}}-z'\) in the integrals with respect to \(y,{\tilde{y}}\), and then integrate with respect to \(z'\) and \(z\), and finally with respect to \(y\) and \({\tilde{y}}\). Together with Lemma 5.4, this leads to

$$\begin{aligned} A_1\le & {} CR^{-d}\int _0^t\left( \int _{0}^s (s-r) ^ {-\kappa }\int _{{\mathbb {R}}^{4d}} \varphi _{R} (t-s, y)G^{\frac{1}{2q}}_{ \alpha } (s-r, y- z) \right. \nonumber \\&\left. \times G^{\frac{1}{2q}}_{ \alpha } (s-r,{\tilde{y}}- z') \gamma ( z- z')d yd{{\tilde{y}}}d zd z'dr \right) ^{1/2}ds\nonumber \\= & {} CR^{-d}\int _0^t\left( \int _{0}^s (s-r) ^ {-\kappa }\int _{{\mathbb {R}}^{4d}} \varphi _{R} (t-s, y+z)G^{\frac{1}{2q}}_{ \alpha } (s-r, y) \right. \nonumber \\&\left. \times G^{\frac{1}{2q}}_{ \alpha } (s-r,{\tilde{y}}) \gamma ( z- z')d zd z'd yd{{\tilde{y}}}dr \right) ^{1/2}ds\nonumber \\\le & {} CR^{-\frac{d}{2}}\int _0^t\left( \int _{0}^s (s-r) ^ {-\kappa }\int _{{{\mathbb {R}}^{2d}}} G^{\frac{1}{2q}}_{ \alpha } (s-r, y) G^{\frac{1}{2q}}_{ \alpha } (s-r,{\tilde{y}})d yd{{\tilde{y}}}dr \right) ^{1/2}ds\nonumber \\\le & {} CR^{-\frac{d}{2}}. \end{aligned}$$
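To make the last two steps explicit, recall that, by its definition in (5.4), \(\varphi _R(t-s,\cdot )\) is the integral of the Green kernel over \(B_R\) and hence integrates to \(\vert B_R\vert \), and that \(\gamma =\mu \) has finite total mass in the present case, so that

$$\begin{aligned} \int _{{\mathbb {R}}^{2d}} \varphi _{R} (t-s, y+z) \gamma (z-z') dz'dz = \mu ({\mathbb {R}}^d) \vert B_R\vert = C R^d, \end{aligned}$$

while, by Lemma 5.4,

$$\begin{aligned} \int _{0}^s (s-r)^{-\kappa } \left( C(s-r)^{\frac{\kappa }{2}}\right) ^2 dr = C^2 s \le C^2 T, \end{aligned}$$

which explains the final bound \(R^{-d} \left( CR^d\right) ^{1/2} = CR^{-\frac{d}{2}}\).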

Treating the term \(A_2\) with similar arguments completes the proof for the case \(\beta =d\). Suppose next that \(\beta <d\), and let us again treat the term \(A_1\) first. We can bound \(A_1\) as follows:

$$\begin{aligned} A_1&\le C R^{\beta -2d} \int _0^t \Bigg ( \int _0^s (s-r)^{-\kappa } \int _{B_R^4} \int _{{{\mathbb {R}}^{9d}}} G_\alpha (t-s, x_1-y) G_\alpha (t-s, x_2-y') \\&\qquad \times G_\alpha (t-s, x_3- {\tilde{y}})G_\alpha (t-s, x_4-{\tilde{y}}') G^{\frac{1}{2q}}_{ \alpha } ({s-r}, z- y) G^{\frac{1}{2q}}_{ \alpha } ({s-r},{\tilde{z}}- {\tilde{y}} ) \\&\qquad \times \vert y- y'-w_1\vert ^{- \beta }\vert {\tilde{y}}-{\tilde{y}}'-w_2\vert ^{- \beta }\vert z-{\tilde{z}}-w_3\vert ^{- \beta }\\&\qquad \times dy\,dy'\,d{\tilde{y}}\,d{\tilde{y}}'\,dz\,d{\tilde{z}}\, d\mu (w_1)d\mu (w_2)d\mu (w_3)\,{dx_1dx_2dx_3dx_4}\,dr\Bigg )^{1/2}ds. \end{aligned}$$

The change of variables \(x_1-y=\theta _1\), \( x_2-y' =\theta _2\), \(x_3 -{\tilde{y}} =\theta _3\), \(x_4-{\tilde{y}}' =\theta _4\), \(z-y= \eta _1\) and \({\tilde{z}} - {\tilde{y}} = \eta _2\), yields

$$\begin{aligned} A_1&\le C R^{\beta -2d} \int _0^t \Bigg ( \int _0^s (s-r)^{-\kappa } \int _{B_R^4} \int _{{{\mathbb {R}}^{9d}}} G_\alpha (t-s, \theta _1) G_\alpha (t-s, \theta _2) \\&\qquad \times G_\alpha (t-s, \theta _3)G_\alpha (t-s, \theta _4) G^{\frac{1}{2q}}_{ \alpha } ({s-r}, \eta _1) G^{\frac{1}{2q}}_{ \alpha } ({s-r}, \eta _2 ) \\&\qquad \times \vert x_1-x_2+\theta _2-\theta _1-w_1\vert ^{- \beta }\vert x_3 -x_4 +\theta _4-\theta _3-w_2\vert ^{- \beta }\\&\qquad \times \vert x_1-x_3 -\theta _1+\theta _4 +\eta _1-\eta _2-w_3\vert ^{- \beta }\\&\qquad \times d \theta _1 d \theta _2 d\theta _3 d\theta _4 d\eta _1d\eta _2d\mu (w_1)d\mu (w_2)d\mu (w_3){dx_1dx_2dx_3dx_4}dr\bigg )^{1/2}ds. \end{aligned}$$

Integrating in the variables \(\theta _2\) and \(\theta _3\) and using the estimate (5.13), we can write

$$\begin{aligned} A_1&\le C R^{\beta -2d} \int _0^t \Bigg ( \int _0^s (s-r)^{-\kappa } \int _{B_R^4} \int _{{{\mathbb {R}}^{7d}}} G_\alpha (t-s, \theta _1) \\&\qquad \times G_\alpha (t-s, \theta _4) G^{\frac{1}{2q}}_{ \alpha } ({s-r}, \eta _1) G^{\frac{1}{2q}}_{ \alpha } ({s-r}, \eta _2 ) \\&\qquad \times \vert x_1-x_2-\theta _1-w_1\vert ^{- \beta }\vert x_3 -x_4 +\theta _4-w_2\vert ^{- \beta }\\&\qquad \times \vert x_1-x_3 -\theta _1+\theta _4 +\eta _1-\eta _2-w_3\vert ^{- \beta }\\&\qquad \times d \theta _1 d\theta _4 d\eta _1d\eta _2d\mu (w_1)d\mu (w_2)d\mu (w_3){dx_1dx_2dx_3dx_4}dr\bigg )^{1/2}ds. \end{aligned}$$

The change of variables \(x_i =R \xi _i \), \(i=1,2,3,4\), yields

$$\begin{aligned} A_1&\le C R^{-\beta /2} \int _0^t \Bigg ( \int _0^s (s-r)^{-\kappa } \int _{B_1^4} \int _{{{\mathbb {R}}^{7d}}} G_\alpha (t-s, \theta _1) \\&\qquad \times G_\alpha (t-s, \theta _4) G^{\frac{1}{2q}}_{ \alpha } ({s-r}, \eta _1) G^{\frac{1}{2q}}_{ \alpha } ({s-r}, \eta _2 ) \\&\qquad \times \vert {\xi _1-\xi _2}-R^{-1}[\theta _1-w_1]\vert ^{- \beta }\vert {\xi _3-\xi _4} +R^{-1}[\theta _4-w_2]\vert ^{- \beta }\\&\qquad \times \vert {\xi _1-\xi _3} +R^{-1} [-\theta _1+\theta _4 +\eta _1-\eta _2-w_3]\vert ^{- \beta }\\&\qquad \times d \theta _1 d\theta _4 d\eta _1d\eta _2d\mu (w_1)d\mu (w_2)d\mu (w_3){d\xi _1d\xi _2d\xi _3d\xi _4}dr\bigg )^{1/2}ds. \end{aligned}$$
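To track the power of \(R\) in this step, note that each of the four variables \(x_i = R\xi _i\) contributes a factor \(R^{d}\) and each of the three Riesz factors contributes \(R^{-\beta }\), whence

$$\begin{aligned} R^{\beta -2d} \left( R^{4d} \cdot R^{-3\beta }\right) ^{1/2} = R^{\beta -2d}\, R^{2d - \frac{3\beta }{2}} = R^{-\frac{\beta }{2}}. \end{aligned}$$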

Taking into account that

$$\begin{aligned} \sup _{z\in {\mathbb {R}}^d} \int _{B_1} |x +z|^{-\beta } dx <\infty , \end{aligned}$$

and that, by Lemma 5.4,

$$\begin{aligned} \int _{{\mathbb {R}}^d} G^{\frac{1}{2q}}_{ \alpha } ({s-r}, \eta ) d\eta = C ({s-r}) ^{\frac{\kappa }{2}}, \end{aligned}$$

we conclude that

$$\begin{aligned} A_1 \le CR^{-\beta /2}. \end{aligned}$$

Treating the term \(A_2\) similarly verifies the case \(\beta <d\) as well, completing the whole proof. \(\square \)

5.4 Proof of Theorem 2.4

In order to prove Theorem 2.4, it suffices to prove tightness and the convergence of the finite-dimensional distributions. For the latter, we can proceed as in [13] together with the arguments of the proof of Theorem 2.3. The tightness is ensured by the following result and Kolmogorov's criterion.

Proposition 5.10

Let \(u(t,x)\) be the solution to (1.1). Then for any \(0\le s < t\le T\) and any \(p\ge 1\) there exists a constant \(C=C(p,T)\) such that

$$\begin{aligned} {\mathbb {E}}\left( \left| \int _{B_R} u(t,x)dx - \int _{B_R}u(s,x)dx\right| ^p \right) \le CR^{\left( d-\frac{\beta }{2}\right) p}(t-s)^{\frac{p}{2}}. \end{aligned}$$

Proof

Let \( \Theta _{x,t,s}\) be given by

$$\begin{aligned} \Theta _{x,t,s}(r,y) = G_{\alpha }(t-r,x-y) 1_{\{r\le t\}} - G_{\alpha }(s-r,x-y) 1_{\{r\le s\}}. \end{aligned}$$

We have, for \(0\le s<t\le T\),

$$\begin{aligned} \int _{B_R}u(t,x)dx - \int _{B_R}u(s,x)dx= \int _0^T \int _{{\mathbb {R}}^d} \int _{B_R} \Theta _{x,t,s}(r,y) \sigma (u(r,y))dxW(dr,dy). \end{aligned}$$

Now the Burkholder inequality implies that, for every \(p\ge 1\),

$$\begin{aligned}&{\mathbb {E}}\left( \left| \int _{B_R} u(t,x)dx - \int _{B_R} u(s,x)dx\right| ^p \right) \\&\quad \le C_{p,T}\left( \int _0^T \int _{{\mathbb {R}}^{2d}} \left( \int _{B_R^2} \Theta _{x,t,s}(r,y) \Theta _{x',t,s}(r,y')dx'dx\right) \gamma (y-y') dy dy'dr \right) ^{\frac{p}{2}}. \end{aligned}$$

Hence it remains to show that

$$\begin{aligned} K_{R}(t,s):= & {} \int _0^T \int _{{\mathbb {R}}^{2d}} \left( \int _{B_R^2} \Theta _{x,t,s}(r,y) \Theta _{x',t,s}(r,y')dx'dx\right) \gamma (y-y')dy dy'dr \nonumber \\\le & {} C R^{2d-\beta } (t-s). \end{aligned}$$
(5.15)

By taking the Fourier transform, we obtain \(K_{R}(t,s)\le C (I_{1}+ I_{2}),\) where

$$\begin{aligned} I_{1}= \int _0^s \int _{{\mathbb {R}}^d} R^d|\xi |^{-d}J_{\frac{d}{2}}^2(R|\xi |)\left| e ^{-(t-r)\vert \xi \vert ^{\alpha }} - e ^{-(s-r)\vert \xi \vert ^{\alpha }}\right| ^2 {\widehat{\gamma }}(\xi )d\xi dr \end{aligned}$$

and

$$\begin{aligned} I_{2}= \int _s^t \int _{{\mathbb {R}}^d} R^d|\xi |^{-d}J_{\frac{d}{2}}^2(R|\xi |)e ^{-2(t-r)\vert \xi \vert ^{\alpha }} {\widehat{\gamma }}(\xi )d\xi dr. \end{aligned}$$

For \(I_{2}\), using \(e ^{-2(t-r)\vert \xi \vert ^{\alpha }}\le 1\) gives \(\int _s^t e ^{-2(t-r)\vert \xi \vert ^{\alpha }} dr \le t-s\). For \(I_{1}\), we first integrate in \(r\): writing \(a= \vert \xi \vert ^{\alpha }\),

$$\begin{aligned} \int _0^s \left| e ^{-(t-r)a} - e ^{-(s-r)a}\right| ^2 dr = \left( 1-e ^{-(t-s)a}\right) ^2 \int _0^s e ^{-2(s-r)a}dr \le \frac{\min \left( 1,(t-s)a\right) ^2}{2a} \le \frac{t-s}{2}, \end{aligned}$$

uniformly in \(\xi \in {\mathbb {R}}^d\). This leads to

$$\begin{aligned} I_1 + I_2\le & {} C(t-s) \int _{{\mathbb {R}}^d} R^d|\xi |^{-d}J_{\frac{d}{2}}^2(R|\xi |) {\widehat{\gamma }}(\xi )d\xi \\= & {} C(t-s)R^{2d-\beta }\int _{{\mathbb {R}}^d} |\xi |^{\beta -2d}J_{\frac{d}{2}}^2(|\xi |) d\xi . \end{aligned}$$
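The last integral is finite: in polar coordinates, and using the standard asymptotics \(J_{\frac{d}{2}}(r) \sim c_d\, r^{\frac{d}{2}}\) as \(r\rightarrow 0\) and \(J_{\frac{d}{2}}^2(r)\le Cr^{-1}\) for large \(r\),

$$\begin{aligned} \int _{{\mathbb {R}}^d} |\xi |^{\beta -2d}J_{\frac{d}{2}}^2(|\xi |) d\xi = \omega _{d-1} \int _0^\infty r^{\beta -d-1} J_{\frac{d}{2}}^2(r) dr < \infty , \end{aligned}$$

since the integrand behaves like \(r^{\beta -1}\) near the origin and is dominated by \(Cr^{\beta -d-2}\) at infinity, and \(0<\beta \le d\). Readers who wish to double-check this numerically may use the following illustrative sketch (not part of the proof; the sample values \(d=2\), \(\beta =1.5\) and the cutoffs are our own choices), based on the SciPy special-function and quadrature routines:

from scipy.integrate import quad
from scipy.special import jv

# Illustrative check that \int_0^\infty r^{beta-d-1} J_{d/2}(r)^2 dr converges.
d, beta = 2, 1.5  # sample dimension and Riesz index with 0 < beta <= d

def integrand(r):
    return r ** (beta - d - 1) * jv(d / 2, r) ** 2

head, _ = quad(integrand, 0.0, 1.0)  # integrand behaves like r^{beta-1} near 0
for cutoff in (50.0, 200.0, 1000.0):  # oscillatory but decaying tail
    tail, _ = quad(integrand, 1.0, cutoff, limit=2000)
    print(f"cutoff={cutoff:g}: integral is approximately {head + tail:.6f}")

The printed values stabilize as the cutoff grows, in agreement with the convergence argument above.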

This concludes the proof. \(\square \)
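Combined with Kolmogorov's continuity criterion, Proposition 5.10 yields the required tightness: under the normalization \(R^{\frac{\beta }{2}-d}\) of the spatial average (this is our reading of the normalization in Theorem 2.4), the moment bound becomes

$$\begin{aligned} {\mathbb {E}}\left( \left| R^{\frac{\beta }{2}-d} \int _{B_R} \left( u(t,x)- u(s,x)\right) dx \right| ^p \right) \le C (t-s)^{\frac{p}{2}}, \end{aligned}$$

so that any choice \(p>2\) gives the exponent \(\frac{p}{2}>1\) required by the criterion.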

Theorem 2.4 then follows by the arguments of the proof of [13, Theorem 1.3] together with Proposition 5.10. The details, although rather lengthy, are directly based on the same arguments that we have used above, and for this reason they are left to the reader.