1 Introduction

We consider the stochastic heat equation in \(\mathbb {R}^\ell \)

$$\begin{aligned} \frac{\partial u}{\partial t} =\frac{1}{2}\Delta u+u \dot{W}\, ,\quad u(0,\cdot )=u_0(\cdot ) \end{aligned}$$
(1.1)

where \(t\ge 0\), \(x\in \mathbb {R}^\ell \) \((\ell \ge 1)\) and \(u_0\) is a Borel measure. Herein, W is a centered Gaussian field which is white in time and correlated in space. More precisely, we assume that the noise W is described by a centered Gaussian family \(W=\{ W(\phi ), \phi \in C_c^\infty (\mathbb {R}_+\times \mathbb {R}^\ell )\}\), with covariance

$$\begin{aligned} \mathbb {E}[W(\phi )W(\psi )]=\frac{1}{(2 \pi )^\ell } \int _0^\infty \int _{\mathbb {R}^\ell } {\mathcal {F}}\phi (s,\xi )\overline{{\mathcal {F}}\psi (s,\xi )}\mu (\xi ) d \xi ds, \end{aligned}$$
(1.2)

where \(\mu \) is a non-negative measurable function and \({\mathcal {F}}\) denotes the Fourier transform in the spatial variables. To avoid trivial situations, we assume that \(\mu \) is not identically zero. The inverse Fourier transform of \(\mu \) is in general a distribution, defined formally by the expression

$$\begin{aligned} \gamma (x)=\frac{1}{(2 \pi )^\ell }\int _{\mathbb {R}^\ell } e^{i \xi \cdot x}\mu (\xi )d \xi \,. \end{aligned}$$
(1.3)

If \(\gamma \) is a locally integrable function, then it is non-negative definite and (1.2) can be written in Cartesian coordinates

$$\begin{aligned} \mathbb {E}[W(\phi )W(\psi )]=\int _0^\infty \iint _{\mathbb {R}^{2\ell }} \phi (s,x)\psi (s,y)\gamma (x-y)dxdyds\,. \end{aligned}$$
(1.4)

The following two distinct hypotheses on the spatial covariance of W are considered throughout the paper.

  1. (H.1)

\(\mu \) is integrable, that is, \(\int _{\mathbb {R}^\ell }\mu (\xi )d \xi <\infty \). In this case, the inverse Fourier transform of \(\mu \) exists and is a bounded continuous function \(\gamma \). Assume in addition that \(\gamma \) is \(\kappa \)-Hölder continuous at 0 for some \(\kappa >0\).

  2. (H.2)

    \(\mu \) satisfies the following conditions:

    1. (H.2a)

      The inverse Fourier transform of \(\mu (\xi )\) is either the Dirac delta mass at 0 or a nonnegative locally integrable function \(\gamma \).

    2. (H.2b)
      $$\begin{aligned} \int _{ \mathbb {R}^\ell }\frac{\mu (\xi ) }{1+ |\xi |^2}d \xi <\infty \,. \end{aligned}$$
      (1.5)
    3. (H.2c)

      (Scaling) There exists \(\alpha \in (0,2)\) such that \(\mu (c \xi )=c^{\alpha -\ell }\mu (\xi )\) for all positive numbers c.

Hereafter, we denote by \(|\cdot |\) the Euclidean norm in \(\mathbb {R}^\ell \) and by \(x\cdot y\) the usual inner product between two vectors x, y in \(\mathbb {R}^\ell \). Condition (H.2b) is known as Dalang’s condition and is sufficient for the existence and uniqueness of a random field solution. If \(\gamma \) exists as a function, condition (H.2c) induces the scaling relation \(\gamma (c x)=c^{-\alpha }\gamma (x)\) for all \(c>0\).
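Indeed, when \(\gamma \) exists as a function, this scaling relation is a one-line change of variables in (1.3): substituting \(\eta =c \xi \),

$$\begin{aligned} \gamma (cx)=\frac{1}{(2 \pi )^\ell }\int _{\mathbb {R}^\ell } e^{i \xi \cdot cx}\mu (\xi )d \xi =\frac{c^{-\ell }}{(2 \pi )^\ell }\int _{\mathbb {R}^\ell } e^{i \eta \cdot x}\mu (c^{-1}\eta )d \eta =c^{-\alpha }\gamma (x)\,, \end{aligned}$$

since \(\mu (c^{-1}\eta )=c^{\ell -\alpha }\mu (\eta )\) by (H.2c).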

Equation (1.1) with noise satisfying condition (H.2) was introduced by Dalang in [9]. In [16], for a large class of initial data, we show that Eq. (1.1) has a unique random field solution under hypothesis (H.2). Under hypothesis (H.1), we note that \(\gamma \) may be negative, but proceeding as in [18], a simple Picard iteration argument gives the existence and uniqueness of the solution. In addition, in both cases, the solution has finite moments of all positive orders. We give a few examples of covariance structures which are usually considered in the literature.

Example 1.1

Covariance functions satisfying (H.2) include the Riesz kernel \(\gamma (x)=|x|^{-\eta }\), with \(0<\eta <2\wedge \ell \), the space-time white noise in dimension one, where \(\gamma =\delta _0\), the Dirac delta mass at 0, and the multidimensional fractional Brownian motion, where \(\gamma (x)= \prod _{i=1}^\ell H_i (2H_i-1) |x^i|^{2H_i-2}\), assuming \(\sum _{i=1}^\ell H_i >\ell -1\) and \(H_i >\frac{1}{2}\) for \(i=1,\dots , \ell \). Covariance functions satisfying (H.1) include \(e^{-|x|^2}\) and the inverse Fourier transform of \(|\xi |^2 e^{-|\xi |^2}\).
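As a quick numerical sanity check (ours, not from the paper), one can verify Dalang's condition (1.5) in dimension \(\ell =1\) for the Riesz-type spectral density \(\mu (\xi )=|\xi |^{\alpha -1}\), which satisfies (H.2c) with exponent \(\alpha \); for \(\alpha =1\) (constant \(\mu \), i.e. space-time white noise) the exact value of the integral is \(\pi \).

```python
import numpy as np
from scipy.integrate import quad

# Dalang's condition (H.2b) for mu(xi) = |xi|^(alpha - 1) in dimension l = 1:
# the integral of mu(xi) / (1 + |xi|^2) is finite for every alpha in (0, 2).
def dalang_integral(alpha):
    f = lambda xi: xi ** (alpha - 1.0) / (1.0 + xi ** 2)
    near, _ = quad(f, 0.0, 1.0)      # integrable singularity at 0 when alpha < 1
    far, _ = quad(f, 1.0, np.inf)
    return 2.0 * (near + far)        # mu is even: double the half-line integral

for alpha in (0.5, 1.0, 1.5):
    print(f"alpha = {alpha}: integral = {dalang_integral(alpha):.5f}")
```

For \(\alpha =1\) the computed value agrees with \(\pi \), and for \(\alpha =\frac{1}{2}\) with the closed form \(\pi \sqrt{2}\).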

Suppose for the moment that \(\dot{W}\) is a space-time white noise and \(u_0\) is a function satisfying

$$\begin{aligned} c\le u_0(x)\le C\,, \quad \text{ for some positive numbers } c,C. \end{aligned}$$
(1.6)

It is first noted in [7] that there exist positive constants \(c_1,c_2\) such that almost surely

$$\begin{aligned} c_1\le \limsup _{R\rightarrow \infty }(\log R)^{-\frac{2}{3}}{\log \sup _{|x|\le R}u(t,x)} \le c_2\,. \end{aligned}$$
(1.7)

Later, Chen shows in [3] that the precise almost sure limit can indeed be computed, namely,

$$\begin{aligned} \lim _{R\rightarrow \infty }(\log R)^{-\frac{2}{3}}{\log \sup _{|x|\le R}u(t,x)}=\frac{3}{4} \left( \frac{2t}{3}\right) ^{\frac{1}{3}}\quad \mathrm {a.s.} \end{aligned}$$
(1.8)

One of the key ingredients in showing (1.8) is the following moment asymptotic result

$$\begin{aligned} \lim _{m\rightarrow \infty }m^{-3}\log \mathbb {E}u(t,x)^m=\frac{t}{24}\,. \end{aligned}$$
(1.9)

Thanks to the scaling property of the space-time white noise, Chen derives (1.9) from the following long-term asymptotic result

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\log \mathbb {E}u(t,x)^m= {\mathcal {E}}_m \end{aligned}$$
(1.10)

where the constant \({\mathcal {E}}_m\) grows as \(\frac{1}{24}m^3\) when \(m\rightarrow \infty \).

Under condition (1.6), analogous results for other kinds of noises are also obtained in [3]. More precisely, for noises satisfying (H.1),

$$\begin{aligned} \lim _{R\rightarrow \infty }(\log R)^{-\frac{1}{2}}\log \sup _{|x|\le R}u(t,x)=\sqrt{2\ell \gamma (0)t}\quad \mathrm {a.s.}\,, \end{aligned}$$
(1.11)

and for noises satisfying (H.2),

$$\begin{aligned} \lim _{R\rightarrow \infty }(\log R)^{-\frac{2}{4- \alpha }}\sup _{|x|\le R}\log u(t,x)=\frac{4- \alpha }{2}\ell ^{\frac{2}{4- \alpha }} \left( \frac{{\mathcal {E}}_H(\gamma )}{2- \alpha }t \right) ^{\frac{2- \alpha }{4- \alpha }} \quad \mathrm {a.s.}\,, \end{aligned}$$
(1.12)

where the variational quantity \({\mathcal {E}}_H(\gamma )\) is introduced in (3.3).

On the other hand, it is known that Eq. (1.1) has a unique random field solution under either (H.1) or (H.2) provided that \(u_0\) satisfies

$$\begin{aligned} p_t*|u_0|(x) <\infty \quad \forall t>0,x\in \mathbb {R}^\ell \,. \end{aligned}$$
(1.13)

In the above and throughout the remainder of the article, \(*\) denotes convolution in the spatial variables. Hence, condition (1.6) excludes other initial data of interest such as compactly supported measures. It is our purpose in the current paper to investigate the almost sure spatial asymptotics of the solutions corresponding to these initial data.

Upon reviewing the method of obtaining (1.8) described previously, one first seeks an analogue of (1.10) for general initial data. In fact, it is noted in [16] that for every \(u_0\) satisfying (1.13), one has

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\log \sup _{x\in \mathbb {R}^\ell } \mathbb {E}\left( \frac{u(t,x)}{p_t*u_0(x)} \right) ^m={\mathcal {E}}_m, \end{aligned}$$
(1.14)

where \({\mathcal {E}}_m\) is a constant whose asymptotic behavior as \(m\rightarrow \infty \) is known. Relation (1.14) suggests that, with a general initial datum, one should normalize u(t,x) in (1.8) (and (1.9)) by the factor \(p_t*u_0(x)\). Therefore, we anticipate the following almost sure spatial asymptotic result.

Conjecture 1.2

Assume that \(u_0\) satisfies (1.13). Under (H.1) we have

$$\begin{aligned} \lim _{R\rightarrow \infty }(\log R)^{-\frac{1}{2}} \sup _{|x|\le R}\left( \log u(t,x)-\log p_t*u_0(x) \right) =\sqrt{2\ell \gamma (0)t}\quad \mathrm {a.s.} \end{aligned}$$
(1.15)

Under (H.2), we have

$$\begin{aligned}&\lim _{R\rightarrow \infty }(\log R)^{-\frac{2}{4- \alpha }}\sup _{|x|\le R}\left( \log u(t,x)-\log p_t*u_0(x) \right) \nonumber \\&\quad =\frac{4- \alpha }{2}\ell ^{\frac{2}{4- \alpha }} \left( \frac{{\mathcal {E}}_H(\gamma )}{2- \alpha }t \right) ^{\frac{2- \alpha }{4- \alpha }} \quad \mathrm {a.s.} \end{aligned}$$
(1.16)

In the particular case of space-time white noise, we conjecture that

$$\begin{aligned} \lim _{R\rightarrow \infty }(\log R)^{-\frac{2}{3}}{\sup _{|x|\le R}(\log u(t,x)-\log p_t*u_0(x))}={\frac{3}{4} \left( \frac{2t}{3}\right) ^{\frac{1}{3}} }\quad \mathrm {a.s.} \end{aligned}$$
(1.17)

In the case of space-time white noise, note that if \(u_0\) satisfies condition (1.6), then (1.17) is no different from (1.8). On the other hand, if \(u_0\) is a Dirac delta mass at \(x_0\), (1.17) precisely describes the spatial asymptotics of \(\log u(t,x)\): at large spatial sites, \(\log u(t,x)\) is concentrated near a logarithmic perturbation of the parabola \(-\frac{1}{2t} (x-x_0)^2\). More precisely, (1.17) with this specific initial datum reduces to

$$\begin{aligned} \lim _{R\rightarrow \infty }(\log R)^{-\frac{2}{3}} \sup _{|x|\le R}\left( \log u(t,x)+\frac{(x-x_0)^2}{2t} \right) ={\frac{3}{4} \left( \frac{2t}{3}\right) ^{\frac{1}{3}}}\,. \end{aligned}$$
(1.18)

While a complete answer to Conjecture 1.2 (including (1.18)) remains open, the current paper offers partial results, focusing on initial data with compact support, especially Dirac masses. To unify the notation, we denote

$$\begin{aligned} \bar{\alpha }= \left\{ \begin{array}{ll} 0&\text{ if (H.1) holds},\\ \alpha &\text{ if (H.2) holds}, \end{array} \right. \quad \text{ and }\quad {\mathcal {E}}= \left\{ \begin{array}{ll} \gamma (0)&\text{ if (H.1) holds},\\ {\mathcal {E}}_H(\gamma )&\text{ if (H.2) holds}, \end{array} \right. \end{aligned}$$
(1.19)

where the variational quantity \({\mathcal {E}}_H(\gamma )\) is introduced below in (3.3). For bounded covariance functions, we obtain the following result.

Theorem 1.3

Assume that (H.1) holds and \(u_0=\delta (\cdot - x_0)\) for some \(x_0\in \mathbb {R}^\ell \). Then (1.15) holds.

For noises satisfying (H.2), or for initial data with compact support, the picture is less complete.

Theorem 1.4

Assume that \(u_0\) is a non-negative measure with compact support and either (H.1) or (H.2) holds. Then we have

$$\begin{aligned}&\limsup _{R\rightarrow \infty }(\log R)^{-\frac{2}{4- \bar{\alpha }}}\sup _{|x|\le R}\left( \log u(t,x)-\log p_t*u_0(x) \right) \nonumber \\&\quad \le \frac{4- \bar{\alpha }}{2}\ell ^{\frac{2}{4- \bar{\alpha }}} \left( \frac{{\mathcal {E}}}{2- \bar{\alpha }}t \right) ^{\frac{2- \bar{\alpha }}{4- \bar{\alpha }}} \quad \mathrm {a.s.} \end{aligned}$$
(1.20)

For initial data satisfying (1.6), the lower bound of (1.16) is proved in [3] using a localization argument initiated in [7]. In our situation, a technical difficulty arises in applying this localization procedure, which leads to the missing lower bound in Theorem 1.4. A detailed explanation is given at the beginning of Sect. 6.2. As an attempt to obtain the exact spatial asymptotics, we propose an alternative result, which is described below. We need to introduce a little more notation. For each \(\varepsilon >0\), we denote

$$\begin{aligned} \gamma _ \varepsilon (x)=(2 \pi )^{-\ell }\int _{\mathbb {R}^\ell }e^{- 2\varepsilon |\xi |^2}e^{i \xi \cdot x}\mu (\xi )d \xi \,, \end{aligned}$$
(1.21)

which is a bounded non-negative definite function. Let \(W_ \varepsilon \) be a centered Gaussian field defined by

$$\begin{aligned} W_ \varepsilon (\phi )=W(p_{\varepsilon }*\phi ) \end{aligned}$$
(1.22)

for all \(\phi \in C_{c}^{\infty }(\mathbb {R}_+\times \mathbb {R}^\ell )\). In the above, \(p_ \varepsilon (x)=(2 \pi \varepsilon )^{-\ell /2}e^{-|x|^2/(2 \varepsilon )}\). The covariance structure of \(W_ \varepsilon \) is given by

$$\begin{aligned} \mathbb {E}[W_ \varepsilon (\phi )W_ \varepsilon (\psi )]&=\frac{1}{(2 \pi )^\ell } \int _0^\infty \int _{\mathbb {R}^\ell } {\mathcal {F}}\phi (s,\xi )\overline{{\mathcal {F}}\psi (s,\xi )}e^{-2 \varepsilon |\xi |^2}\mu ( \xi ) d \xi ds\nonumber \\&=\int _0^\infty \iint _{\mathbb {R}^{2\ell }} \phi (s,x)\psi (s,y)\gamma _ \varepsilon (x-y)dxdyds \end{aligned}$$
(1.23)

for all \(\phi ,\psi \in C_c^{\infty }(\mathbb {R}_+\times \mathbb {R}^\ell )\). In other words, \(W_ \varepsilon \) is white in time and correlated in space with spatial covariance function \(\gamma _ \varepsilon \), which satisfies (H.1). Under condition (H.2c), \(\gamma _ \varepsilon \) satisfies the scaling relation

$$\begin{aligned} \gamma _ \varepsilon (x)=\varepsilon ^{-\frac{\alpha }{2}}\gamma _1(\varepsilon ^{-\frac{1}{2}} x)\quad \text{ for } \text{ all }\quad \varepsilon >0,x\in \mathbb {R}^\ell \,. \end{aligned}$$
(1.24)
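The relation (1.24) is an exact change of variables in (1.21) and can be checked numerically; the sketch below (ours, not from the paper) assumes dimension one and the Riesz-type density \(\mu (\xi )=|\xi |^{\alpha -1}\), a choice made purely for illustration.

```python
import numpy as np
from scipy.integrate import quad

# Check (1.24) in dimension l = 1 for mu(xi) = |xi|^(alpha - 1), alpha = 0.5.
alpha = 0.5

def gamma_eps(eps, x):
    # (1.21) with even mu reduces to a half-line cosine integral:
    # gamma_eps(x) = (1/pi) int_0^inf e^{-2 eps xi^2} cos(xi x) xi^{alpha-1} dxi
    f = lambda xi: np.exp(-2.0 * eps * xi ** 2) * np.cos(xi * x) * xi ** (alpha - 1.0)
    head, _ = quad(f, 0.0, 1.0)      # integrable singularity at 0
    tail, _ = quad(f, 1.0, np.inf)
    return (head + tail) / np.pi

for eps, x in [(0.25, 0.7), (0.04, 1.3)]:
    lhs = gamma_eps(eps, x)
    rhs = eps ** (-alpha / 2.0) * gamma_eps(1.0, x / np.sqrt(eps))
    print(f"eps={eps}, x={x}:  gamma_eps(x)={lhs:.6f}  vs scaled gamma_1={rhs:.6f}")
```

The two columns agree to quadrature precision, as (1.24) predicts.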

Let \(u_ \varepsilon \) be the solution to Eq. (1.1) with \(\dot{W}\) replaced by \(\dot{W}_ \varepsilon \). It is expected that as \(\varepsilon \downarrow 0\), \(u_ \varepsilon (t,x)\) converges to u(t,x) in \(L^2(\Omega )\) for each (t,x); see [1] for a proof when the initial datum is a bounded function. The following result describes the spatial asymptotics of the family of random fields \(\{u_\varepsilon \}_{\varepsilon \in (0,1)}\).

Theorem 1.5

Assume that \(u_0\) is a non-negative measure with compact support and either (H.1) or (H.2) holds. Then

$$\begin{aligned}&\limsup _{R\rightarrow \infty }(\log R)^{-\frac{2}{4- \bar{\alpha }}}\sup _{|x|\le R,\varepsilon \in (0,1)}\left( \log u_ \varepsilon (t,x)-\log p_t*u_0(x) \right) \nonumber \\&\quad \le \frac{4- \bar{\alpha }}{2}\ell ^{\frac{2}{4- \bar{\alpha }}} \left( \frac{{\mathcal {E}}}{2- \bar{\alpha }}t \right) ^{\frac{2- \bar{\alpha }}{4- \bar{\alpha }}}\quad \mathrm {a.s.} \end{aligned}$$
(1.25)

If, in particular, \(u_0=\delta (\cdot -x_0)\) for some \(x_0 \in \mathbb {R}^{\ell }\), then

$$\begin{aligned}&\lim _{R\rightarrow \infty }(\log R)^{-\frac{2}{4- \bar{\alpha }}}\sup _{|x|\le R,\varepsilon \in (0,1)}\left( \log u_ \varepsilon (t,x) + \frac{(x-x_0)^2}{2t} \right) \nonumber \\&\quad =\frac{4- \bar{\alpha }}{2}\ell ^{\frac{2}{4- \bar{\alpha }}} \left( \frac{{\mathcal {E}}}{2- \bar{\alpha }}t \right) ^{\frac{2- \bar{\alpha }}{4- \bar{\alpha }}}\quad \mathrm {a.s.} \end{aligned}$$
(1.26)

Neither of (1.16) and (1.26) is stronger than the other. While the result of Theorem 1.5 relates to the solution of (1.1) only indirectly, it is interesting in its own right. In Hairer’s theory of regularity structures (cf. [14]), one first regularizes the noise to obtain a sequence of approximate solutions. The solution of the corresponding stochastic partial differential equation is then constructed as the limiting object of this sequence. From this point of view, (1.26) provides a unified characteristic of the sequence of approximating solutions \(\{u_ \varepsilon \}_{\varepsilon \in (0,1)}\), which approaches the solution u as \(\varepsilon \downarrow 0\). The proof of (1.26) does not rely on localization but rather on the Gaussian nature of the noise. This opens the possibility of extending (1.26) to noises that are colored in time, which will be a topic for future research.

The remainder of the article is structured as follows. In Sect. 2 we briefly summarize the theory of stochastic integration and well-posedness results for (1.1). In Sect. 3 we introduce some variational quantities which are related to the spatial asymptotics. In Sect. 4 we derive some Feynman–Kac formulas for the solution and its moments; these formulas play a crucial role in our considerations. In Sect. 5 we investigate the high moment asymptotics and Hölder regularity of the solutions of (1.1) with respect to various parameters. The results in Sect. 5 are used to obtain the upper bounds in (1.15) and (1.16). This is presented in Sect. 6, where we also give a proof of the lower bounds in Theorems 1.3, 1.4 and 1.5.

2 Preliminaries

We introduce some notation and concepts which are used throughout the article. The space of Schwartz functions is denoted by \({\mathcal {S}}(\mathbb {R}^\ell )\). The Fourier transform of a function \(g\in {\mathcal {S}}(\mathbb {R}^\ell )\) is defined with the normalization

$$\begin{aligned} {\mathcal {F}}g(\xi )=\int _{\mathbb {R}^\ell }e^{-i \xi \cdot x}g(x)dx\,, \end{aligned}$$

so that the inverse Fourier transform is given by \({\mathcal {F}}^{-1}g(\xi )=(2 \pi )^{-\ell }{\mathcal {F}}g(- \xi )\). The Plancherel identity with this normalization reads

$$\begin{aligned} \int _{\mathbb {R}^\ell }|f(x)|^2dx=\frac{1}{(2 \pi )^\ell }\int _{\mathbb {R}^\ell }|{\mathcal {F}}f(\xi )|^2 d \xi \,. \end{aligned}$$
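As a quick numerical sanity check of this normalization (ours, not part of the paper), take the one-dimensional Gaussian \(g(x)=e^{-x^2/2}\), for which \({\mathcal {F}}g(\xi )=\sqrt{2\pi }\,e^{-\xi ^2/2}\) under the convention above; both sides of the Plancherel identity then equal \(\sqrt{\pi }\).

```python
import numpy as np

# Riemann-sum check of the Plancherel identity in the chosen normalization.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
g = np.exp(-x ** 2 / 2.0)
Fg = np.sqrt(2.0 * np.pi) * np.exp(-x ** 2 / 2.0)   # Fourier transform of g

lhs = np.sum(g ** 2) * dx                     # int |g|^2 dx
rhs = np.sum(Fg ** 2) * dx / (2.0 * np.pi)    # (2*pi)^(-1) int |Fg|^2 dxi
print(lhs, rhs, np.sqrt(np.pi))               # all three agree
```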

Let us now describe stochastic integration with respect to W. We can interpret W as a Brownian motion with values in an infinite-dimensional Hilbert space. In this context, the stochastic integration theory with respect to W can be handled by classical theories (see, for example, [11]). We briefly recall the main features of this theory.

We denote by \(\mathfrak {H}_0\) the Hilbert space defined as the closure of \(\mathcal {S}(\mathbb {R}^\ell )\) under the inner product

$$\begin{aligned} \langle g, h \rangle _{ \mathfrak {H}_0}=\frac{1}{(2\pi )^\ell } \int _{\mathbb {R}^\ell }\mathcal {F}g(\xi )\overline{\mathcal {F}h(\xi )} \mu (\xi ) d \xi \,, \end{aligned}$$
(2.1)

which can also be written as

$$\begin{aligned} \langle g, h \rangle _{ \mathfrak {H}_0}=\iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }g(x)h(y)\gamma (x-y)dxdy \,. \end{aligned}$$
(2.2)

If \(\gamma \) satisfies (H.1), then \(\mathfrak {H}_0\) contains distributions such as Dirac delta masses. The Gaussian family W can be extended to an isonormal Gaussian process \(\{W(\phi ), \phi \in L^2(\mathbb {R}_+, \mathfrak {H}_0)\}\) parametrized by the Hilbert space \(\mathfrak {H}:=L^2(\mathbb {R}_+, \mathfrak {H}_0)\). For any \(t\ge 0\), let \(\mathcal {F}_{t}\) be the \(\sigma \)-algebra generated by W up to time t. Let \(\Lambda \) be the space of \(\mathfrak {H}_0\)-valued predictable processes g such that \(\mathbb {E}\Vert g\Vert _{\mathfrak {H}}^{2}<\infty \). Then, one can construct (cf. [16]) the stochastic integral \(\int _0^\infty \int _{\mathbb {R}^\ell }g(s,x) \, W(ds,dx)\) such that

$$\begin{aligned} \mathbb {E}\left( \int _0^\infty \int _{\mathbb {R}^\ell }g(s,x) \, W(ds,dx) \right) ^{2} = \mathbb {E}\Vert g\Vert _{\mathfrak {H}}^{2}. \end{aligned}$$
(2.3)

To emphasize the variables, we sometimes write \(\Vert g(s,y)\Vert _{\mathfrak {H}_{s,y}}\) for \(\Vert g\Vert _\mathfrak {H}\). Stochastic integration over a finite time interval is defined by

$$\begin{aligned} \int _0^t\int _{\mathbb {R}^\ell }g(s,x) \, W(ds,dx)= \int _0^\infty \int _{\mathbb {R}^\ell }1_{[0,t]}(s) g(s,x) \, W(ds,dx)\,. \end{aligned}$$

Finally, Burkholder’s inequality in this context reads

$$\begin{aligned} \left\| \int _0^t\int _{\mathbb {R}^\ell }g(s,x)W(ds,dx)\right\| _{L^p(\Omega )}\le \sqrt{4 p}\left\| \int _0^t\Vert g(s,\cdot )\Vert ^2_{\mathfrak {H}_0}ds \right\| ^{\frac{1}{2}}_{L^{\frac{p}{2}}(\Omega )}\,, \end{aligned}$$
(2.4)

which holds for all \(p\ge 2\) and \(g\in \Lambda \). A useful application of (2.4) is the following result.

Lemma 2.1

Let \(m\ge 2\) be an integer, f be a deterministic function on \([0,\infty )\times \mathbb {R}^\ell \) and \(u=\{u(s,x): s\ge 0,x\in \mathbb {R}^\ell \}\) be a predictable random field such that

$$\begin{aligned} \mathcal {U}_m(s):=\sup _{x\in \mathbb {R}^\ell }\Vert u(s,x)\Vert _{L^m(\Omega )}<\infty \,. \end{aligned}$$

Under hypothesis (H.2), we have

$$\begin{aligned} \left\| \int _0^t\int _{\mathbb {R}^\ell }f(s,y)u(s,y)W(ds,dy) \right\| _{L^m(\Omega )} \le \sqrt{4m} \Vert |f(s,y)| \mathbf {1}_{[0,t]}(s)\mathcal {U}_m(s)\Vert _{\mathfrak {H}_{s,y}}\,; \end{aligned}$$

and under hypothesis (H.1), we have

$$\begin{aligned}&\left\| \int _0^t \int _{\mathbb {R}^{\ell }} f(s,y)u(s,y) W(ds,dy)\right\| _{L^m(\Omega )} \\&\quad \le \sqrt{4m \gamma (0)} \left( \int _0^t \left( \int _{\mathbb {R}^{\ell }} f(s,y) dy \mathcal {U}_m(s)\right) ^2 ds \right) ^{\frac{1}{2}}\,. \end{aligned}$$

Proof

We consider only hypothesis (H.2); the other case is obtained similarly. In view of the Burkholder inequality (2.4) and the Minkowski inequality, it suffices to show

$$\begin{aligned} \int _0^t\Vert \Vert f(s,\cdot )u(s,\cdot )\Vert ^2_{\mathfrak {H}_0}\Vert _{L^{\frac{m}{2}}(\Omega )} ds\le \Vert |f(s,y)| \mathbf {1}_{[0,t]}(s)\mathcal {U}_m(s)\Vert _{\mathfrak {H}_{s,y}}^2\,. \end{aligned}$$
(2.5)

In fact, using (2.2) and the Minkowski inequality, the left-hand side in the above is at most

$$\begin{aligned} \int _0^t\iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }|f(s,x)f(s,y)| \Vert u(s,x)u(s,y)\Vert _{L^{\frac{m}{2}}(\Omega )}\gamma (x-y)dxdyds\,. \end{aligned}$$

Note in addition that by the Cauchy–Schwarz inequality,

$$\begin{aligned} \Vert u(s,x)u(s,y)\Vert _{L^{\frac{m}{2}}(\Omega )}\le \Vert u(s,x)\Vert _{L^m(\Omega )}\Vert u(s,y)\Vert _{L^m(\Omega )}\le \mathcal {U}_m^2(s). \end{aligned}$$

From here, (2.5) follows immediately and the proof is complete. \(\square \)

We now state the definition of the solution to Eq. (1.1) using the stochastic integral introduced previously.

Definition 2.2

Let \(u=\{u(t,x), t\ge 0, x \in \mathbb {R}^\ell \}\) be a real-valued predictable stochastic process such that for all \(t \ge 0\) and \(x\in \mathbb {R}^\ell \) the process \(\{p_{t-s}(x-y)u(s,y) \mathbf {1}_{[0,t]}(s), 0 \le s \le t, y \in \mathbb {R}^\ell \}\) is an element of \(\Lambda \).

We say that u is a mild solution of (1.1) if for all \(t \ge 0\) and \(x\in \mathbb {R}^\ell \) we have

$$\begin{aligned} u(t,x)=p_t*u_0 (x) + \int _0^t \int _{\mathbb {R}^\ell }p_{t-s}(x-y)u(s,y) W(ds,dy) \quad a.s. \end{aligned}$$
(2.6)

The following existence and uniqueness result has been proved in [16] under hypothesis (H.2). Under hypothesis (H.1), one can proceed as in [18], using a simple Picard iteration argument to obtain the existence and uniqueness of the solution.

Theorem 2.3

Suppose that \(u_0\) satisfies (1.13) and the spectral measure \(\mu \) satisfies hypotheses (H.1) or (H.2). Then there exists a unique solution to Eq. (1.1).

When \(u_0=\delta (\cdot -z)\), we denote the corresponding unique solution by \(\mathcal {Z}(z; t, x)\). In particular \(\mathcal {Z}(z;\cdot ,\cdot ) \) is predictable and satisfies

$$\begin{aligned} \mathcal {Z}(z;t,x)=p_t(x-z)+\int _0^t\int _{\mathbb {R}^\ell } p_{t-s}(x-y)\mathcal {Z}(z;s,y)W(ds,dy) \end{aligned}$$
(2.7)

for all \(t\ge 0\) and \(x\in \mathbb {R}^\ell \).

Next, we record a Gronwall-type lemma which will be useful later.

Lemma 2.4

Suppose \(\alpha \in [0,2)\) and f is a locally bounded function on \([0,\infty )\) such that

$$\begin{aligned} f_t\le A\int _0^t\left( \frac{s(t-s)}{t} \right) ^{-\frac{\alpha }{2}}f_s ds+Bg_t \quad \text{ for } \text{ all }\quad t\ge 0 \,, \end{aligned}$$

where A, B are positive constants and g is a non-decreasing function. Then there exists a constant \(C_ \alpha \) such that

$$\begin{aligned} f_t\le 2Bg_t e^{C_ \alpha A^{\frac{2}{2- \alpha }}t} \quad \text{ for } \text{ all }\quad t\ge 0\,. \end{aligned}$$

Proof

Fix \(T>0\). For each \(\rho >0\), denote \(D_ \rho =\sup _{t\in [0,T]}f_t e^{-\rho t}\). It follows that

$$\begin{aligned} D_ \rho \le A\int _0^t\left( \frac{s(t-s)}{t} \right) ^{-\frac{\alpha }{2}} e^{-\rho (t-s)}ds D_ \rho +Bg_T\,. \end{aligned}$$

It is easy to see

$$\begin{aligned} \int _0^t\left( \frac{s(t-s)}{t} \right) ^{-\frac{\alpha }{2}} e^{-\rho (t-s)}ds&\le 2\int _{\frac{t}{2}}^t\left( \frac{s(t-s)}{t} \right) ^{-\frac{\alpha }{2}} e^{-\rho (t-s)}ds\\&\le 2^{1+\frac{\alpha }{2}}\int _0^\infty s^{-\frac{\alpha }{2}}e^{-\rho s}ds\\&\le C \rho ^{-\frac{2-\alpha }{2}} \end{aligned}$$

for some suitable constant C depending only on \(\alpha \). We then choose \(\rho =(2AC)^{\frac{2}{2- \alpha }} \) so that \(AC\rho ^{-\frac{2- \alpha }{2}}=\frac{1}{2}\). This leads to \(D_ \rho \le 2Bg_T\), which implies the result. \(\square \)
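The key estimate above can be illustrated numerically; in the following sketch (ours, with arbitrarily chosen \(\alpha \) and t), the ratio of the left-hand side to \(\rho ^{-\frac{2-\alpha }{2}}\) stays bounded as \(\rho \) grows.

```python
import numpy as np
from scipy.integrate import quad

# I(rho) = int_0^t (s(t-s)/t)^(-alpha/2) e^(-rho(t-s)) ds  should satisfy
# I(rho) <= C * rho^(-(2-alpha)/2), i.e. I(rho) * rho^((2-alpha)/2) stays bounded.
alpha, t = 0.5, 2.0

def I(rho):
    f = lambda s: (s * (t - s) / t) ** (-alpha / 2.0) * np.exp(-rho * (t - s))
    left, _ = quad(f, 0.0, t / 2.0)    # integrable singularity at s = 0
    right, _ = quad(f, t / 2.0, t)     # integrable singularity at s = t
    return left + right

for rho in (1.0, 10.0, 100.0):
    print(f"rho = {rho:6.1f}:  I(rho) * rho^0.75 = {I(rho) * rho ** ((2.0 - alpha) / 2.0):.4f}")
```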

Let us conclude this section by introducing some key notation which we will use throughout the article. Let \(B=(B(t),t\ge 0)\) denote a standard Brownian motion in \(\mathbb {R}^\ell \) starting at the origin. For each \(t>0\), we denote

$$\begin{aligned} B_{0,t}(s)=B(s)-\frac{s}{t} B(t)\quad \forall s\in [0,t]\,. \end{aligned}$$
(2.8)

The process \(B_{0,t}=(B_{0,t}(s),0\le s\le t)\) is independent of B(t) and is a Brownian bridge which starts and ends at the origin. An important connection between B and \(B_{0,t}\) is the following identity. For every \(\lambda \in (0,1)\) and every bounded measurable function F on \(C([0,\lambda t];\mathbb {R}^\ell ) \) we have

$$\begin{aligned}&\mathbb {E}\left[ F(\{B_{0,t} (s);0\le s\le \lambda t\}) \right] \nonumber \\&\quad =(1- \lambda )^{-\frac{\ell }{2}}\mathbb {E}\left[ \exp \left\{ -\frac{|B(\lambda t)|^2}{2(1- \lambda )t} \right\} F(\{B (s);0\le s\le \lambda t\}) \right] \,. \end{aligned}$$
(2.9)

This is in fact an application of Girsanov’s theorem; see [16, Eq. (2.8)] for more details.
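The construction (2.8) lends itself to a quick Monte Carlo check (ours, with arbitrarily chosen discretization parameters): in dimension one, the bridge should have covariance \(\min (s,r)-\frac{sr}{t}\) and be uncorrelated with B(t) (being jointly Gaussian, uncorrelated means independent).

```python
import numpy as np

# Monte Carlo check of the bridge B_{0,t}(s) = B(s) - (s/t) B(t) from (2.8).
rng = np.random.default_rng(0)
t, n, n_paths = 1.0, 100, 50_000
dt = t / n
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n)), axis=1)
times = dt * np.arange(1, n + 1)
bridge = B - B[:, -1:] * (times / t)     # subtract (s/t) B(t) path by path

i, j = n // 4 - 1, n // 2 - 1            # grid points s = t/4, r = t/2
s, r = times[i], times[j]
emp = np.mean(bridge[:, i] * bridge[:, j])
theo = min(s, r) - s * r / t
corr = np.corrcoef(bridge[:, j], B[:, -1])[0, 1]
print(f"cov: {emp:.4f} (theory {theo:.4f}),  corr(bridge, B(t)): {corr:.4f}")
```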

Let \(B^1,B^2,\dots \) be independent copies of B and \(B^{1}_{0,t},B^2_{0,t},\dots \) be the corresponding Brownian bridges. An important quantity which appears frequently in our consideration is

$$\begin{aligned} \Theta _t(m):=\sup _{s\in (0,t]} \mathbb {E}\exp \left\{ \int _0^s\sum _{1\le j<k\le m}\gamma (B_{0,s}^j(r)-B_{0,s}^k(r))dr \right\} \,. \end{aligned}$$
(2.10)

From the proof of Proposition 4.2 in [16], it is easy to see that under either hypothesis (H.1) or (H.2), \(\Theta _t(m) < \infty \) for any \(t>0\). Finally, \(A\lesssim E\) means \(A\le CE\) for some positive constant C, independent of all the terms appearing in E.

3 Variations

We introduce two variational quantities and give their basic properties and relations. The high moment asymptotics are governed by a variational quantity known as the Hartree energy (cf. [8]). If there exists a locally integrable function \(\gamma \) whose Fourier transform is \(\mu \), then the Hartree energy can be expressed as

$$\begin{aligned} {\mathcal {E}}_H(\gamma )=\sup _{g\in {\mathcal {G}}}\left\{ \int _{\mathbb {R}^\ell } \int _{\mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy-\int _{\mathbb {R}^\ell }|\nabla g(x) |^2dx \right\} \,, \end{aligned}$$
(3.1)

where \({\mathcal {G}}\) is the set

$$\begin{aligned} {\mathcal {G}}=\left\{ g\in W^{1,2}(\mathbb {R}^\ell ):\Vert g\Vert _{L^2(\mathbb {R}^\ell )}=1\right\} \,. \end{aligned}$$
(3.2)

The subscript H stands for “Hartree”. We can also write this variation in Fourier mode. Indeed, the representation (1.3) leads to

$$\begin{aligned} \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy&=(2 \pi )^{-\ell }\int _{\mathbb {R}^\ell }|{\mathcal {F}}[g^2](\xi )|^2\mu (\xi )d \xi \\&=(2 \pi )^{-3\ell }\int _{\mathbb {R}^\ell }|{\mathcal {F}}g*{\mathcal {F}}g(\xi )|^2\mu (\xi )d \xi \,. \end{aligned}$$

Setting \(h=(2 \pi )^{-\frac{\ell }{2}}{\mathcal {F}}g\) so that \(\Vert h\Vert _{L^2}=1\), we arrive at

$$\begin{aligned} {\mathcal {E}}_H(\gamma )=\sup _{h\in {\mathcal {A}}}\left\{ (2 \pi )^{-\ell } \int _{\mathbb {R}^\ell } |h*h(\xi )|^2 \mu ( \xi )d \xi -\int _{\mathbb {R}^\ell }|h(\xi )|^2|\xi |^2 d \xi \right\} \end{aligned}$$
(3.3)

where

$$\begin{aligned} {\mathcal {A}}=\left\{ h:\mathbb {R}^\ell \rightarrow \mathbb {C}\,\Big |\,\Vert h\Vert _{L^2(\mathbb {R}^\ell )}=1,\int _{\mathbb {R}^\ell }|\xi |^2|h(\xi )|^2 d \xi <\infty \text{ and } \overline{h(\xi )}=h(- \xi )\right\} \,. \end{aligned}$$

Under (H.1), bounding \(\gamma (x-y)\) from above by \(\gamma (0)\) in (3.1) shows that \({\mathcal {E}}_H(\gamma )\le \gamma (0)\), which is finite. The fact that this variation (either in the form (3.1) or (3.3)) is finite under condition (H.2) is not immediate. In some special cases, this is verified in [6] and [5].

Proposition 3.1

Suppose (1.5) holds. Then \({\mathcal {E}}_H(\gamma )\) is finite.

Proof

Our proof is based on the argument in [5, Proposition 3.1]. Here, however, we work on the frequency space and use the representation (3.3). Let h be in \({\mathcal {A}}\). Applying the Cauchy–Schwarz inequality yields

$$\begin{aligned} |h*h(\xi )|^2=\left| \int _{\mathbb {R}^\ell }h(\xi - \xi ')h(\xi ')d \xi '\right| ^2 \le \int _{\mathbb {R}^\ell }|h(\xi - \xi ')|^2d \xi '\int _{\mathbb {R}^\ell }|h(\xi ')|^2d \xi '=1\,. \end{aligned}$$

On the other hand, using the elementary inequality

$$\begin{aligned} |\xi |^2\le 2|\xi - \xi '|^2+2|\xi '|^2 \end{aligned}$$

and Cauchy–Schwarz inequality, we also get

$$\begin{aligned} |\xi |^2 |h*h(\xi )|^2&\le {2}\left| \int _{\mathbb {R}^\ell }h(\xi -\xi ')|\xi - \xi '|h(\xi ')d \xi ' \right| ^2+{2}\left| \int _{\mathbb {R}^\ell }h(\xi -\xi ')|\xi '|h(\xi ')d \xi ' \right| ^2\\&\le { 4}\int _{\mathbb {R}^\ell }|h(\xi ')|^2|\xi '|^2 d \xi '\,. \end{aligned}$$

Then, for every \(R>0\) we have

$$\begin{aligned} \int _{\mathbb {R}^\ell }|h*h(\xi )|^2 \mu (\xi )d \xi&=\int _{|\xi |\le R}|h*h(\xi )|^2 \mu (\xi )d \xi +\int _{|\xi |>R}|h*h(\xi )|^2 \mu (\xi )d \xi \\&\le \int _{|\xi |\le R} \mu (\xi )d \xi +{4}\int _{|\xi |>R}\frac{ \mu (\xi )}{|\xi |^2} d \xi \int _{\mathbb {R}^\ell }|h(\xi ')|^2|\xi '|^2 d \xi '\,. \end{aligned}$$

We now choose R sufficiently large so that \({ 4}(2 \pi )^{-\ell } \int _{|\xi |>R}\frac{ \mu (\xi )}{|\xi |^2} d \xi <1\). This implies

$$\begin{aligned} (2 \pi )^{-\ell }\int _{\mathbb {R}^\ell }|h*h(\xi )|^2 \mu (\xi )d \xi -\int _{\mathbb {R}^\ell }|h(\xi )|^2|\xi |^2 d \xi \le (2 \pi )^{-\ell }\int _{|\xi |\le R} \mu (\xi )d \xi \end{aligned}$$

for all h in \({\mathcal {A}}\), which finishes the proof. \(\square \)

In establishing the lower bound of spatial asymptotic, another variation arises, which is given by

$$\begin{aligned} {\mathcal {M}}(\gamma )=\sup _{g\in {\mathcal {G}}}\left\{ \left( \int _{\mathbb {R}^\ell } \int _{\mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy\right) ^{\frac{1}{2}} -\frac{1}{2}\int _{\mathbb {R}^\ell }|\nabla g(x) |^2dx \right\} \,, \end{aligned}$$
(3.4)

or alternatively in frequency mode

$$\begin{aligned} {\mathcal {M}}(\gamma )=\sup _{h\in {\mathcal {A}}}\left\{ \left( (2 \pi )^{-\ell } \int _{\mathbb {R}^\ell } |h*h(\xi )|^2 \mu ( \xi )d \xi \right) ^{\frac{1}{2}}-\frac{1}{2}\int _{\mathbb {R}^\ell }|h(\xi )|^2|\xi |^2 d \xi \right\} \,. \end{aligned}$$
(3.5)

Lemma 3.2

\(\lim _{\varepsilon \rightarrow 0}{\mathcal {E}}_H(\gamma _\varepsilon )={\mathcal {E}}_H(\gamma ) \) and \(\lim _{\varepsilon \rightarrow 0}{\mathcal {M}}(\gamma _\varepsilon )={\mathcal {M}}(\gamma ) \), where we recall that \(\gamma _\varepsilon \) is defined in (1.21).

Proof

We only prove the first limit; the second is proved analogously. Let g be in \({\mathcal {G}}\). Note that

$$\begin{aligned}&\liminf _{\varepsilon \downarrow 0}\iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma _\varepsilon (x-y)g^2(x)g^2(y)dxdy\\&\quad =\liminf _{\varepsilon \downarrow 0}(2 \pi )^{-3\ell }\int _{\mathbb {R}^\ell }|{\mathcal {F}}g*{\mathcal {F}}g(\xi )|^2e^{-2 \varepsilon |\xi |^2} \mu (\xi )d \xi \\&\quad \ge (2 \pi )^{-3\ell }\int _{\mathbb {R}^\ell }|{\mathcal {F}}g*{\mathcal {F}}g(\xi )|^2 \mu (\xi )d \xi = \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy \end{aligned}$$

by Fatou’s lemma. Since \({\mathcal {E}}_H(\gamma _\varepsilon )\) is finite, we have

$$\begin{aligned} \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma _\varepsilon (x-y)g^2(x)g^2(y)dxdy-\int _{\mathbb {R}^\ell }|\nabla g(x)|^2 dx\le {\mathcal {E}}_H(\gamma _\varepsilon )\,. \end{aligned}$$

Sending \(\varepsilon \) to 0 yields

$$\begin{aligned} \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy-\int _{\mathbb {R}^\ell }|\nabla g(x)|^2 dx\le \liminf _{\varepsilon \downarrow 0}{\mathcal {E}}_H(\gamma _\varepsilon )\,. \end{aligned}$$

Since the above inequality holds for every g in \({\mathcal {G}}\), we obtain \({\mathcal {E}}_H(\gamma )\le \liminf _{\varepsilon \downarrow 0}{\mathcal {E}}_H(\gamma _\varepsilon )\). On the other hand, it is evident (from (3.3)) that \({\mathcal {E}}_H(\gamma _\varepsilon )\le {\mathcal {E}}_H(\gamma )\). This concludes the proof.\(\square \)

Under the scaling condition (H.2c), \({\mathcal {E}}_H\) and \({\mathcal {M}}\) are linked together by the following result.

Proposition 3.3

Assuming condition (H.2c), \({\mathcal {E}}_H(\gamma )\) is finite if and only if \({\mathcal {M}}(\gamma )\) is finite. In addition,

$$\begin{aligned} {\mathcal {M}}(\gamma )=\frac{4- \alpha }{4} \left( \frac{2{\mathcal {E}}_H(\gamma )}{2- \alpha } \right) ^{\frac{2- \alpha }{4- \alpha }}\,. \end{aligned}$$

Before giving the proof, let us see how (3.1) and (3.4) are connected to a certain interpolation inequality. Under the scaling condition (H.2c), relating the finiteness of \({\mathcal {E}}_H(\gamma )\) to such an inequality is a routine procedure in analysis. For instance, when \(\gamma = \delta \) and \(\ell =1\), the fact that

$$\begin{aligned} \sup _{g\in {\mathcal {G}}}\left\{ \int _{\mathbb {R}}g^4(x)dx-\int _\mathbb {R}|g'(x)|^2dx \right\} <\infty \end{aligned}$$

is equivalent to the following Gagliardo–Nirenberg inequality

$$\begin{aligned} \Vert g\Vert _{L^4}\le C\Vert g\Vert ^{3/4}_{L^2}\Vert g'\Vert ^{1/4}_{L^2} \end{aligned}$$

for all g in \(W^{1,2}(\mathbb {R})\). For readers’ convenience, we provide a brief explanation below.
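To illustrate why the exponents 3/4 and 1/4 appear, the following numerical sketch (our own illustration, not part of the argument) checks that the ratio \(\Vert g\Vert _{L^4}/(\Vert g\Vert _{L^2}^{3/4}\Vert g'\Vert _{L^2}^{1/4})\) is invariant under the rescaling \(g_\theta (x)=\theta ^{1/2}g(\theta x)\) used below; this scale invariance is exactly what forces those exponents.

```python
import numpy as np

x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]

def ratio(theta):
    # g_theta(x) = theta^{1/2} g(theta x), with g a Gaussian profile
    g = np.sqrt(theta) * np.exp(-((theta * x) ** 2) / 2.0)
    gp = np.gradient(g, dx)                 # numerical derivative g'
    L2 = np.sqrt((g ** 2).sum() * dx)       # ||g||_{L^2}
    L4 = ((g ** 4).sum() * dx) ** 0.25      # ||g||_{L^4}
    H1 = np.sqrt((gp ** 2).sum() * dx)      # ||g'||_{L^2}
    return L4 / (L2 ** 0.75 * H1 ** 0.25)

vals = [ratio(t) for t in (0.5, 1.0, 2.0)]
# The ratio does not change under rescaling (up to discretization error).
assert max(vals) - min(vals) < 1e-3
```
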

Proposition 3.4

Assume that the scaling relation (H.2c) holds.

(i) If \({\mathcal {E}}_H(\gamma )\) is finite then there exists \(\kappa >0\) such that for all g in \(W^{1,2}(\mathbb {R}^\ell )\)

$$\begin{aligned} \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy \le \kappa \left( \int _{\mathbb {R}^\ell }|g(x) |^2dx \right) ^{2-\frac{\alpha }{2}} \left( \int _{\mathbb {R}^\ell }|\nabla g(x) |^2dx \right) ^{\frac{\alpha }{2}} \,. \end{aligned}$$
(3.6)

In addition the constant \(\kappa \) can be chosen to be \(\kappa (\gamma )\) where

$$\begin{aligned} \kappa (\gamma ) :=\frac{2}{\alpha }\left( \frac{\alpha }{2- \alpha }{\mathcal {E}}_H(\gamma ) \right) ^{\frac{2- \alpha }{2}}\,. \end{aligned}$$
(3.7)

(ii) If (3.6) holds for some finite constant \(\kappa >0\), then \({\mathcal {E}}_H(\gamma )\) is finite and the best constant in (3.6) is \(\kappa (\gamma )\).

Proof

Recall that \({\mathcal {G}}\) is defined in (3.2).

(i) Let g be in \({\mathcal {G}}\). For each \(\theta >0\), the function \(x\mapsto g_ \theta (x):=\theta ^{\frac{\ell }{2}}{g(\theta x)}\) also belongs to \({\mathcal {G}}\). Hence,

$$\begin{aligned} \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma (x-y)g_ \theta ^2(x)g_ \theta ^2(y)dxdy-\int _{\mathbb {R}^\ell }|\nabla g_ \theta (x)|^2 dx\le {\mathcal {E}}_H(\gamma )\,. \end{aligned}$$

Writing these integrals back in terms of g and using (H.2c) yields

$$\begin{aligned} \theta ^ \alpha \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy- \theta ^2\int _{\mathbb {R}^\ell }|\nabla g(x)|^2 dx\le {\mathcal {E}}_H(\gamma ) \end{aligned}$$

for all \(\theta >0\). Optimizing the left-hand side (with respect to \(\theta \)) leads to

$$\begin{aligned}&\frac{2- \alpha }{\alpha }\left( \frac{\alpha }{2}\right) ^{\frac{2}{2- \alpha }}\left( \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy \right) ^{\frac{2}{2- \alpha }}\\&\quad \le {\mathcal {E}}_H(\gamma )\left( \int _{\mathbb {R}^\ell }|\nabla g(x)|^2 dx\right) ^{\frac{\alpha }{2- \alpha }}\,. \end{aligned}$$

Removing the normalization \(\Vert g\Vert _{L^2}=1\) and performing some algebraic manipulations yields the result.

(ii) Let \(\kappa _0\) be the best constant in (3.6). Then for every \(g\in {\mathcal {G}}\),

$$\begin{aligned}&\iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }\gamma (x-y)g^2(x)g^2(y)dxdy -\int _{\mathbb {R}^\ell }|\nabla g(x)|^2 dx \le \kappa _0 \Vert \nabla g\Vert ^{\alpha }_{L^2}-\Vert \nabla g\Vert ^{2}_{L^2}\\&\quad \le \sup _{\theta >0}\{\kappa _0 \theta ^ \alpha - \theta ^2 \} =\frac{2- \alpha }{\alpha }\left( \frac{\alpha }{2} \kappa _0\right) ^{\frac{2}{2- \alpha }}\,. \end{aligned}$$

This shows that \({\mathcal {E}}_H(\gamma )\) is finite and at most \(\frac{2- \alpha }{\alpha }( \frac{\alpha }{2} \kappa _0)^{\frac{2}{2- \alpha }}\), which also means \(\kappa (\gamma )\le \kappa _0\). On the other hand, (i) already implies \(\kappa _0\le \kappa (\gamma )\), which completes the proof. \(\square \)
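The elementary optimization \(\sup _{\theta >0}\{\kappa _0 \theta ^\alpha - \theta ^2\}\) used in both parts of the proof can be checked numerically; the sketch below (with arbitrary illustrative values of \(\alpha \) and \(\kappa _0\)) compares a grid maximization against the closed form.

```python
import numpy as np

# Illustrative values (assumptions): alpha in (0,2), kappa0 > 0.
alpha, kappa0 = 1.3, 2.5

# Grid maximization of f(theta) = kappa0*theta^alpha - theta^2
theta = np.linspace(1e-6, 50.0, 2_000_000)
numeric = np.max(kappa0 * theta ** alpha - theta ** 2)

# Closed form: ((2-alpha)/alpha) * (alpha*kappa0/2)^(2/(2-alpha))
closed = (2 - alpha) / alpha * (alpha * kappa0 / 2) ** (2 / (2 - alpha))

assert abs(numeric - closed) < 1e-6
```
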

Proof of Proposition 3.3

Reasoning as in Proposition 3.4, we see that \({\mathcal {M}}(\gamma )\) is finite if and only if (3.6) holds for some constant \(\kappa >0\). In addition, the best constant \(\kappa (\gamma )\) in (3.6) satisfies the relation

$$\begin{aligned} {\mathcal {M}}(\gamma )=\frac{4- \alpha }{4} \left( \frac{\alpha }{2}\right) ^{\frac{\alpha }{4- \alpha }}(\kappa (\gamma ))^{\frac{2}{4- \alpha }}\,. \end{aligned}$$

Together with (3.7), this yields the result. \(\square \)
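As a sanity check on the algebra, one can verify numerically that substituting (3.7) into the displayed relation reproduces the closed form stated in Proposition 3.3 (illustrative values of \(\alpha \) and \({\mathcal {E}}_H(\gamma )\) only).

```python
import math

# Arbitrary illustrative values (assumptions): alpha in (0,2), E = E_H(gamma) > 0.
alpha, E = 1.2, 3.7

# kappa(gamma) from (3.7)
kappa = (2 / alpha) * (alpha / (2 - alpha) * E) ** ((2 - alpha) / 2)

# M(gamma) via the relation displayed in the proof of Proposition 3.3
M_via_kappa = (4 - alpha) / 4 * (alpha / 2) ** (alpha / (4 - alpha)) \
    * kappa ** (2 / (4 - alpha))

# M(gamma) via the closed form stated in Proposition 3.3
M_direct = (4 - alpha) / 4 * (2 * E / (2 - alpha)) ** ((2 - alpha) / (4 - alpha))

assert abs(M_via_kappa - M_direct) < 1e-12
```
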

The following result is a prelude to the connection between \({\mathcal {E}}_H\) and \({\mathcal {M}}\) and exponential functionals of Brownian motions.

Lemma 3.5

Let \(\{B(s),s\ge 0\}\) be a Brownian motion in \(\mathbb {R}^n\) and let D be a bounded open domain in \(\mathbb {R}^n\) containing 0. Let \(h(s,x)\) be a bounded function defined on \([0,1]\times \mathbb {R}^n\) which is continuous in x and equicontinuous (over \(x\in \mathbb {R}^n\)) in s. Then

$$\begin{aligned}&\lim _{t\rightarrow \infty }\frac{1}{t}\log \mathbb {E}\left[ \exp \left\{ \int _0^t h\left( \frac{s}{t},B(s)-\frac{s}{t} B(t)\right) ds \right\} ;\tau _D\ge t\right] \nonumber \\&\quad =\int _0^1\sup _{g\in {\mathcal {G}}_D }\left\{ \int _{D} h(s,x) g^2(x)dx-\frac{1}{2}\int _{D}|\nabla g(x)|^2dx \right\} ds\,, \end{aligned}$$
(3.8)

where \({\mathcal {G}}_D\) is the class of functions g in \(W^{1,2}(\mathbb {R}^n)\) such that \(\int _{D}|g(x)|^2dx=1\) and \(\tau _D\) is the exit time \(\tau _D := \inf \{t\ge 0: B_t \notin D\}\).

Proof

Observe that the process \(\{B_{0,t}(s)=B(s)-\frac{s}{t} B(t)\}_{s\in [0,t]}\) is a Brownian bridge. An analogous result, with the Brownian bridge replaced by a Brownian motion, has been obtained in [6]. Our main idea here is to apply a change of measure to transfer the known result for Brownian motion to the result for the Brownian bridge (i.e. the limit (3.8)). Since the probability density of the Brownian bridge \(B_{0,t}\) with respect to a standard Brownian motion is singular near t, a truncation is needed. We fix \(\theta \in (0,1)\) and consider first the limit

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\log \mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} ;\tau _D\ge t\right] \,. \end{aligned}$$

Let M be such that \(|x|\le M\) for all \(x\in D\). Using the Girsanov theorem (see [16, Eq. (2.38)]), we can write

$$\begin{aligned}&\mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} ;\tau _D\ge t\right] \nonumber \\&\quad =(1- \theta )^{-\frac{n}{2}}\mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B(s)\right) ds-\frac{|B(\theta t)|^2}{2t(1- \theta )} \right\} ;\tau _D\ge t\right] \nonumber \\&\quad \ge (1- \theta )^{-\frac{n}{2}}\mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B(s)\right) ds-\frac{M^2}{2t(1- \theta )} \right\} ;\tau _D\ge t\right] \,. \end{aligned}$$
(3.9)

The result of [6, Proposition 3.1] asserts that

$$\begin{aligned}&\lim _{t\rightarrow \infty }\frac{1}{t}\log \mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B(s)\right) ds \right\} ;\tau _D\ge t\right] \nonumber \\&\quad =\int _0^ \theta \sup _{g\in {\mathcal {G}}_D}\left\{ \int _{D} h(s,x) g^2(x)dx-\frac{1}{2}\int _{D}|\nabla g(x)|^2dx \right\} ds\,. \end{aligned}$$
(3.10)

This leads to

$$\begin{aligned}&\liminf _{t\rightarrow \infty }\frac{1}{t}\log \mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} ;\tau _D\ge t\right] \nonumber \\&\quad \ge \int _0^ \theta \sup _{g\in {\mathcal {G}}_D}\left\{ \int _{D} h(s,x) g^2(x)dx-\frac{1}{2}\int _{D}|\nabla g(x)|^2dx \right\} ds\,. \end{aligned}$$
(3.11)

In obtaining the above limit, we have used the trivial facts

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{1}{t}\log (1- \theta )^{-\frac{n}{2}}=\lim _{t\rightarrow \infty }\frac{1}{t}\log \exp \left\{ -\frac{M^2}{2t(1- \theta )} \right\} =0\,. \end{aligned}$$

Note that the singularity when \(\theta \uparrow 1\) has disappeared at this stage. On the other hand, the estimate

$$\begin{aligned} \left| \log \mathbb {E}\exp \left\{ \int _0^t h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} -\log \mathbb {E}\exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} \right| \le (1- \theta )t\Vert h\Vert _\infty \end{aligned}$$

implies that

$$\begin{aligned}&\lim _{\theta \uparrow 1}\limsup _{t\rightarrow \infty }\left| \frac{1}{t}\log \mathbb {E}\exp \left\{ \int _0^t h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} \right. \\&\quad \left. -\frac{1}{t}\log \mathbb {E}\exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} \right| =0\,. \end{aligned}$$

Hence, we can send \(\theta \uparrow 1\) in (3.11) to obtain the lower bound for (3.8). The upper bound for (3.8) is proved analogously. Indeed, from (3.9), we have

$$\begin{aligned}&\mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} ;\tau _D\ge t\right] \\&\quad \le (1- \theta )^{-\frac{n}{2}}\mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B(s)\right) ds+\frac{M^2}{2t(1- \theta )} \right\} ;\tau _D\ge t\right] \,, \end{aligned}$$

which when combined with (3.10) yields

$$\begin{aligned}&\limsup _{t\rightarrow \infty }\frac{1}{t}\log \mathbb {E}\left[ \exp \left\{ \int _0^{\theta t} h\left( \frac{s}{t},B_{0,t}(s)\right) ds \right\} ;\tau _D\ge t\right] \\&\quad \le \int _0^ \theta \sup _{g\in {\mathcal {G}}_D}\left\{ \int _{D} h(s,x) g^2(x)dx -\frac{1}{2}\int _{D}|\nabla g(x)|^2dx \right\} ds\,. \end{aligned}$$

Since the singularity when \(\theta \uparrow 1\) has been eliminated in the regime \(t\rightarrow \infty \), we can send \(\theta \uparrow 1\) as previously to obtain the upper bound for (3.8). \(\square \)
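The bridge representation \(B_{0,t}(s)=B(s)-\frac{s}{t}B(t)\) used throughout the proof can be verified on covariances. The following deterministic check (our own illustration) confirms, via the covariance \(\mathrm {Cov}(B(s),B(r))=s\wedge r\), that the transformed process has the Brownian-bridge covariance \(s\wedge r-\frac{sr}{t}\).

```python
import numpy as np

t = 3.0
s = np.linspace(0.1, t, 30)          # time grid including the endpoint t
K = np.minimum.outer(s, s)           # covariance of Brownian motion: min(s_i, s_j)
w = s / t                            # weights of the subtracted (s/t) B(t) term
Kt = K[:, -1]                        # Cov(B(s_i), B(t)) = s_i

# Covariance of B(s_i) - (s_i/t) B(t), expanded by bilinearity:
Kb = K - np.outer(w, Kt) - np.outer(Kt, w) + np.outer(w, w) * K[-1, -1]

expected = np.minimum.outer(s, s) - np.outer(s, s) / t
assert np.allclose(Kb, expected)     # bridge covariance: min(s,r) - s*r/t
```
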

We conclude this section with an observation: (H.2c) induces the following scaling relation on \({\mathcal {E}}_H(\gamma )\)

$$\begin{aligned} {\mathcal {E}}_H(\lambda \gamma )=\lambda ^{\frac{2}{2- \alpha }}{\mathcal {E}}_H(\gamma )\quad \text{ for } \text{ all }\quad \lambda >0\,. \end{aligned}$$
(3.12)
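Indeed, writing \(X(g)=\iint \gamma (x-y)g^2(x)g^2(y)dxdy\) and \(Y(g)=\int |\nabla g|^2dx\) (our shorthand), the scaling argument of Proposition 3.4 gives \({\mathcal {E}}_H(\gamma )=\sup _{g,\theta >0}\{\theta ^\alpha X(g)-\theta ^2 Y(g)\}\), and the substitution \(\theta =\lambda ^{1/(2-\alpha )}\sigma \) yields (3.12):

```latex
\mathcal{E}_H(\lambda\gamma)
  =\sup_{g,\,\theta>0}\big\{\lambda\theta^{\alpha}X(g)-\theta^{2}Y(g)\big\}
  =\sup_{g,\,\sigma>0}\lambda^{\frac{2}{2-\alpha}}\big\{\sigma^{\alpha}X(g)-\sigma^{2}Y(g)\big\}
  =\lambda^{\frac{2}{2-\alpha}}\,\mathcal{E}_H(\gamma)\,.
```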

4 Feynman–Kac formulas and functionals of Brownian Bridges

We derive Feynman–Kac formulas for the moments \(\mathbb {E}u^m(t,x)\) for integers \(m\ge 2\). These formulas play important roles in proving the upper and lower bounds in (1.15) and (1.26).

To discuss our contributions in the current section, let us assume for the moment that \(\dot{W}\) is a space-time white noise and \(\ell =1\). The best-known Feynman–Kac formula for the second moment is

$$\begin{aligned} \mathbb {E}[(u(t,x))^2]=\mathbb {E}\left( \prod _{j=1}^2 u_0(B^j(t)+x)\exp \left\{ \int _0^t \delta (B^1(s)-B^2(s))ds \right\} \right) \,, \end{aligned}$$

where \(B^1,B^2\) are two independent Brownian motions starting at 0. If \(u_0\) is merely a measure, some effort is needed to make sense of \(u_0(B(t)+x)\), which appears on the right-hand side above. An attempt in this direction is carried out in [4] using Meyer–Watanabe’s theory of Wiener distributions.

The Feynman–Kac formulas presented here (see (4.10) below) have appeared in [16]. However, there seems to be a minor gap in that article: Eq. (4.52) there has not been proven when \(u_0\) is a measure. In the current article, we take the opportunity to fill this gap. Our approach is in the same spirit as [16] and is different from [4]. In particular, we do not make use of Wiener distributions.

Proposition 4.1

Let \(u_0\) be a measure satisfying (1.13). Then

$$\begin{aligned} u(t,x)=\int _{\mathbb {R}^\ell }\mathcal {Z}(z;t,x)u_0(dz)\,. \end{aligned}$$
(4.1)

In addition, if (H.1) holds, then

$$\begin{aligned} \frac{\mathcal {Z}(z; t,x)}{p_t(z-x)} = \mathbb {E}_{B} \exp \left\{ \int _0^t \int _{\mathbb {R}^\ell } \delta \left( B_{0,t}(t-s)+ \frac{t-s}{t}z + \frac{s}{t} x -y \right) W(ds,dy) - \frac{t}{2}\gamma (0)\right\} \,. \end{aligned}$$
(4.2)

Proof

Let \(v(t,x)\) be the integral on the right-hand side of (4.1). From (2.7), integrating z with respect to \(u_0(dz)\) and applying the stochastic Fubini theorem (cf. [10, Theorem 4.33]), we have

$$\begin{aligned} v(t,x)= & {} \int _{\mathbb {R}^\ell }p_t(x-z)u_0(dz) \!+\!\int _{\mathbb {R}^\ell }\int _0^t\int _{\mathbb {R}^\ell }p_{t-s}(x-y)\mathcal {Z}(z;s,y)W(ds,dy)u_0(dz)\\= & {} p_t*u_0(x) +\int _0^t\int _{\mathbb {R}^\ell }p_{t-s}(x-y)v(s,y)W(ds,dy)\,. \end{aligned}$$

Hence, v is a solution of (1.1) with initial datum \(u_0\). By uniqueness (Theorem 2.3), we see that \(u=v\), and (4.1) follows.

Next, we show (4.2) assuming (H.1). Fix \(t>0\) and \(x\in \mathbb {R}^\ell \). For every \(u_0\in C_c^\infty (\mathbb {R}^\ell )\), the following Feynman–Kac formula (see [17, Prop. 5.2] for a general case) holds

$$\begin{aligned} u(t,x)=\mathbb {E}_B u_0(B(t)+x)\exp \left\{ \int _0^t\int _{\mathbb {R}^\ell } \delta (B(t-s)+x-y)W(ds,dy)-\frac{t}{2} \gamma (0) \right\} \,. \end{aligned}$$

Using the decomposition (2.8) and the fact that \(B_{0,t}\) and B(t) are independent, we see that

$$\begin{aligned} u(t,x) =\int _{\mathbb {R}^\ell }Y(z;t,x) p_t(z) u_0(z+x) dz \end{aligned}$$
(4.3)

where

$$\begin{aligned} Y(z;t,x)&=\mathbb {E}_B \exp \left\{ \int _0^t\int _{\mathbb {R}^\ell } \delta (B_{0,t}(t-s)+\frac{t-s}{t} z+x-y)W(ds,dy)-\frac{t}{2} \gamma (0) \right\} \\&=\mathbb {E}_B \exp \left\{ V_{t,x}(z)\right\} \,. \end{aligned}$$

Together with (4.1) we obtain

$$\begin{aligned} \int _{\mathbb {R}^\ell }\mathcal {Z}(z;t,x)u_0(z)dz=\int _{\mathbb {R}^\ell }Y(z-x;t,x) p_t(z-x) u_0(z) dz \end{aligned}$$

for all \(u_0\in C^\infty _c(\mathbb {R}^\ell )\).

Next we show that \(z\mapsto Y(z;t,x)\) is continuous. Fix \(p>2\). From the elementary relation \(|e^x-e^y| \le (e^x + e^y)|x-y|\) and the Cauchy–Schwarz inequality, it follows that

$$\begin{aligned}&\mathbb {E}\left| Y(z; t,x) - Y(z'; t,x) \right| ^p\\&\quad \le \left( \mathbb {E}_W \left( \mathbb {E}_B\left[ e^{V_{t,x}(z)}+e^{V_{t,x}(z')}\right] ^2 \right) ^p\right) ^{1/2} \left( \mathbb {E}_W \left( \mathbb {E}_B|V_{t,x}(z)-V_{t,x}(z')|^2\right) ^p\right) ^{1/2}\,. \end{aligned}$$

Since \(\gamma \) is bounded, conditioned on \(B_{0,t}\), \(V_{t,x}(z)\) is a normal random variable with variance bounded uniformly in x and z. It follows that \(V_{t,x}(z)\) has uniformly bounded exponential moments. That is,

$$\begin{aligned} \sup _{z,x\in \mathbb {R}^\ell }\mathbb {E}e^{2pV_{t,x}(z)}\le C_{p,t} \end{aligned}$$

for some constant \(C_{p,t}\). We now resort to the Minkowski inequality, the exponential bound for \(V_{t,x}(z)\) above, and the equivalence of \(L^p\) and \(L^2\) moments for Gaussian random variables to obtain

$$\begin{aligned} \mathbb {E}\left| Y(z; t,x) - Y(z'; t,x) \right| ^p \le C_{p,t} \left( \mathbb {E}|V_{t,x}(z)-V_{t,x}(z')|^2 \right) ^{p/2} \,. \end{aligned}$$

In addition, under (H.1), \(\gamma \) is \(\kappa \)-Hölder continuous at 0 for some \(\kappa >0\); it follows that

$$\begin{aligned}&\mathbb {E}|V_{t,x}(z)-V_{t,x}(z')|^2\\&\quad = \mathbb {E}\bigg (\int _0^t\int _{\mathbb {R}^\ell }\delta (B_{0,t}(t-s)+\frac{t-s}{t} z+x-y)W(ds,dy)\\&\qquad - \int _0^t\int _{\mathbb {R}^\ell }\delta (B_{0,t}(t-s)+\frac{t-s}{t} z'+x-y)W(ds,dy) \bigg )^2\\&\quad = \int _0^t \left( \gamma (0) - \gamma \Big ( \frac{t-s}{t} (z-z')\Big ) \right) ds \lesssim t |z-z'|^{\kappa }\,. \end{aligned}$$

We have shown

$$\begin{aligned} \mathbb {E}\left| Y(z; t,x) - Y(z'; t,x) \right| ^p \lesssim |z-z'|^{p \kappa } \,. \end{aligned}$$

Thus, the process \(z\mapsto Y(z;t,x)\) has a continuous version. On the other hand, \(z\mapsto \mathcal {Z}(z;t,x)\) is also continuous (see Proposition 5.5 below). It follows that \(\mathcal {Z}(z;t,x)=Y(z-x;t,x) p_t(z-x)\), which is exactly (4.2). \(\square \)

Proposition 4.2

Assuming (H.1), we have

$$\begin{aligned}&\mathbb {E}\left[ \prod _{j=1}^m \frac{\mathcal {Z}(z_j;t,x_j)}{p_t(x_j-z_j)} \right] \nonumber \\&\quad = \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m} \gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)+\frac{s}{t}(z_j-z_k) \!+\!\frac{t-s}{t}(x_j-x_k)\right) ds\right\} \,.\nonumber \\ \end{aligned}$$
(4.4)

and

$$\begin{aligned} \mathbb {E}\left[ \prod _{j=1}^m \frac{\mathcal {Z}(z_j;t,x_j)}{p_t(x_j-z_j)} \right] \le \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m} \gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)\right) ds\right\} \,. \end{aligned}$$
(4.5)

Proof

We observe that conditioned on B,

$$\begin{aligned} V(B,z,x):= \int _0^t\int _{\mathbb {R}^\ell }\delta \left( B_{0,t}(t-s)+\frac{t-s}{t}z+\frac{s}{t} x-y\right) W(ds,dy) \end{aligned}$$

is a normal random variable with mean zero. In addition, for every \(x,x',z,z'\in \mathbb {R}^\ell \), applying (1.23), we have

$$\begin{aligned}&\mathbb {E}\left[ V(B^j,z,x)V(B^k,z',x') \Big |B^j,B^k\right] \nonumber \\&\quad = \int _0^t \gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)+\frac{s}{t}(z-z')+\frac{t-s}{t}(x-x') \right) ds\,. \end{aligned}$$
(4.6)

For every \((x_1,\dots ,x_m)\in (\mathbb {R}^\ell )^m\), using (4.2) and (4.6), we have

$$\begin{aligned}&\mathbb {E}\left[ \prod _{j=1}^m \frac{\mathcal {Z}(z_j;t,x_j)}{p_t(x_j-z_j)} \right] \nonumber \\&\quad = \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m} \gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)+\frac{s}{t}(z_j-z_k) +\frac{t-s}{t}(x_j-x_k)\right) ds\right\} \nonumber \\ \end{aligned}$$
(4.7)

Note that in the exponent above, the diagonal terms (with \(j=k\)) are absent because they cancel with the normalization factor \(-\frac{t}{2} \gamma (0)\) in (4.2) after taking expectation with respect to W. Finally, applying [16, Lemma 4.1], we obtain (4.5) from (4.7). \(\square \)
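To make the cancellation explicit: conditioned on \(B^1,\dots ,B^m\), the sum \(\sum _{j=1}^m V(B^j,z_j,x_j)\) is a centered Gaussian random variable, so

```latex
\mathbb{E}_W\Big[e^{\sum_{j=1}^m V(B^j,z_j,x_j)}\,\Big|\,B^1,\dots,B^m\Big]
  =\exp\Big(\tfrac12\sum_{j,k=1}^m \mathbb{E}\big[V(B^j,z_j,x_j)V(B^k,z_k,x_k)\mid B^1,\dots,B^m\big]\Big)\,,
```

and, by (4.6) with \(j=k\), each diagonal term contributes \(\frac{1}{2}\int _0^t \gamma (0)ds=\frac{t}{2}\gamma (0)\); the resulting factor \(e^{\frac{mt}{2}\gamma (0)}\) is cancelled by the product of the m normalizations \(e^{-\frac{t}{2}\gamma (0)}\) in (4.2).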

To extend the previous result to noises satisfying (H.2), we need the following estimate.

Proposition 4.3

Assume (H.2). Then there exists a constant c, depending only on \(\alpha \), such that for any \(\beta \in (0, 4\wedge (\ell -\alpha ))\),

$$\begin{aligned} \left\| \frac{\mathcal {Z}_{\varepsilon }(x_0;t,x)}{p_t(x-x_0)} - \frac{\mathcal {Z}(x_0;t,x)}{p_t(x-x_0)} \right\| _{L^m(\Omega )}\lesssim \varepsilon ^{\frac{\beta }{4}}t^{\frac{2- \alpha - \beta }{4}}\sqrt{m} \Theta _t^{\frac{1}{m}}(m)e^{cm^{\frac{2}{2- \alpha }}t}\quad \text{ for } \text{ all }\quad t\ge 0 \end{aligned}$$
(4.8)

where \(\mathcal {Z}_\varepsilon \) is the solution to (2.7) with W replaced by \(W_\varepsilon \) and \(\Theta _t(m)\) is defined in (2.10).

Proof

Let us put

$$\begin{aligned} M_s=\sup _{y\in \mathbb {R}^\ell }\frac{\Vert \mathcal {Z}(x_0;s,y)-\mathcal {Z}_ \varepsilon (x_0;s,y)\Vert _{L^m(\Omega )}}{p_t(y-x_0)}\,. \end{aligned}$$

From (2.7), we have

$$\begin{aligned} \frac{\mathcal {Z}(x_0;t,x)}{p_t(x-x_0)}&= 1 + \int _0^t \int _{\mathbb {R}^\ell } \frac{p_{t-s}(x-y) p_s(y-x_0)}{p_t(x-x_0)} \frac{\mathcal {Z}(x_0;s,y)}{p_s(y-x_0)} W(ds,dy)\\&= 1+ \int _0^t \int _{\mathbb {R}^\ell } p_{\frac{s(t-s)}{t}} (y-x_0 - \frac{s}{t} (x-x_0)) \frac{\mathcal {Z}(x_0;s,y)}{p_s(y-x_0)} W(ds,dy)\,, \end{aligned}$$

From this and the corresponding equation for \(\mathcal {Z}_\varepsilon \), we obtain

$$\begin{aligned}&\left\| \frac{\mathcal {Z}(x_0;t,x)}{p_t(x-x_0)} - \frac{\mathcal {Z}_{\varepsilon }(x_0;t,x)}{ p_t(x-x_0)} \right\| _{L^m(\Omega )} \\&\quad \le \left\| \int _0^t \int _{\mathbb {R}^\ell } p_{\frac{s(t-s)}{t}} (y-x_0-\frac{s}{t} (x-x_0)) \frac{\mathcal {Z}(x_0;s,y) - \mathcal {Z}_{\varepsilon }(x_0;s,y)}{p_s(y-x_0)} W(ds,dy) \right\| _{L^m(\Omega )}\\&\qquad + \left\| \int _0^t \int _{\mathbb {R}^\ell } p_{\frac{s(t-s)}{t}} (y-x_0-\frac{s}{t} (x-x_0)) \frac{\mathcal {Z}_{\varepsilon }(x_0;s,y)}{p_s(y-x_0)} \left[ W(ds,dy) - W_{\varepsilon }(ds,dy)\right] \right\| _{L^m(\Omega )}\\&\quad := I_1 + I_2\,. \end{aligned}$$

To estimate \(I_1\), we use Lemma 2.1 and (H.2c) to obtain

$$\begin{aligned} I_1&\lesssim \sqrt{m} \left( \int _0^t \int _{\mathbb {R}^\ell } e^{-\frac{2s(t-s)}{t} |\xi |^2} \mu (\xi )d\xi M^2_s ds \right) ^{1/2}\\&\lesssim \sqrt{m} \left( \int _0^t \left( \frac{s(t-s)}{t}\right) ^{-\frac{\alpha }{2}} M^2_s ds \right) ^{1/2}\,. \end{aligned}$$

To estimate \(I_2\), we first note that the noise \(W-W_ \varepsilon \) has spectral density \((1- e^{-\varepsilon |\xi |^2})^2\mu (\xi )\). Applying Lemma 2.1, we obtain

$$\begin{aligned} I_2\lesssim \sqrt{m} \sup _{s\le t,y\in \mathbb {R}^\ell }\left\| \frac{\mathcal {Z}_\varepsilon (x_0;s,y)}{p_s(y-x_0)} \right\| _{L^m(\Omega )} \left( \int _0^t \int _{\mathbb {R}^\ell } e^{-\frac{2s(t-s)}{t} |\xi |^2}(1-e^{-\varepsilon |\xi |^2})^2 \mu (\xi )d\xi ds \right) ^{1/2}. \end{aligned}$$

Let us fix \(\beta \in (0,4\wedge (\ell - \alpha ))\). Applying the elementary inequality \(1-e^{-\varepsilon |\xi |^2}\le \varepsilon ^{\beta /4}|\xi |^{\beta /2}\) together with the estimate

$$\begin{aligned} \int _0^t\int _{\mathbb {R}^\ell } e^{-\frac{2s(t-s)}{t} |\xi |^2}|\xi |^{\beta } \mu (\xi )d\xi ds\lesssim \int _0^t\left( \frac{s(t-s)}{t} \right) ^{-\frac{\alpha +\beta }{2}}ds\lesssim t^{\frac{2- \alpha - \beta }{2}}\,, \end{aligned}$$

we get

$$\begin{aligned} I_2\lesssim \varepsilon ^{\frac{\beta }{4}} t^{\frac{2- \alpha -\beta }{4}}\sqrt{m} \sup _{s\le t,y\in \mathbb {R}^\ell }\left\| \frac{\mathcal {Z}_\varepsilon (x_0;s,y)}{p_s(y-x_0)} \right\| _{L^m(\Omega )}\,. \end{aligned}$$
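The elementary inequality \(1-e^{-u}\le u^{a}\) for \(u\ge 0\) and \(a\in (0,1]\), applied above with \(u=\varepsilon |\xi |^2\) and \(a=\beta /4\), can be checked numerically (a quick sketch, not part of the argument).

```python
import numpy as np

# For 0 < u < 1: 1 - e^{-u} <= u <= u^a; for u >= 1: 1 - e^{-u} < 1 <= u^a.
u = np.linspace(0.0, 100.0, 1_000_001)
for a in (0.1, 0.25, 0.5, 0.99, 1.0):
    # small additive slack guards against floating-point rounding at u = 0
    assert np.all(1.0 - np.exp(-u) <= u ** a + 1e-15)
```
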

Reasoning as in [16, Lemma 4.1], we see that

$$\begin{aligned}&\mathbb {E}_B\exp \left\{ \sum _{1\le j<k\le m}\int _0^t \gamma _\varepsilon (B^j_{0,t}(s)-B^k_{0,t}(s))ds \right\} \\&\quad \le \mathbb {E}_B\exp \left\{ \sum _{1\le j<k\le m}\int _0^t \gamma (B^j_{0,t}(s)-B^k_{0,t}(s))ds \right\} \,. \end{aligned}$$

Two key observations here are that \(\gamma \) and \(\gamma _ \varepsilon \) have spectral measures \(\mu (\xi )\) and \(e^{-2\varepsilon |\xi |^2}\mu (\xi )\) respectively, and that \(e^{-2\varepsilon |\xi |^2}\mu (\xi )\le \mu (\xi )\). Hence, it follows from (4.5) and the previous estimate that

$$\begin{aligned} \sup _{s\le t,y\in \mathbb {R}^\ell }\left\| \frac{\mathcal {Z}_\varepsilon (x_0;s,y)}{p_s(y-x_0)} \right\| _{L^m(\Omega )}\le \Theta _t^{\frac{1}{m}}(m)\,. \end{aligned}$$

In summary, we have shown

$$\begin{aligned} M_t\lesssim \sqrt{m}\left( \int _0^t\left( \frac{s(t-s)}{t} \right) ^{-\frac{\alpha }{2}}M_s^2 ds \right) ^{\frac{1}{2}}+\varepsilon ^{\frac{\beta }{4}}t^{\frac{2- \alpha -\beta }{4}}\sqrt{m} \Theta _t^{\frac{1}{m}}(m)\,. \end{aligned}$$

An application of Lemma 2.4 then yields

$$\begin{aligned} M_t\lesssim \varepsilon ^{\frac{\beta }{4}}t^{\frac{2- \alpha -\beta }{4}}\sqrt{m} \Theta _t^{\frac{1}{m}}(m)e^{cm^{\frac{2}{2- \alpha }}t}\quad \text{ for } \text{ all }\quad t\ge 0\,, \end{aligned}$$
(4.9)

for some constant c depending only on \(\alpha \). \(\square \)

We are now ready to derive Feynman–Kac formulas for positive moments.

Proposition 4.4

Let \(u_0\) be a measure satisfying (1.13). Under (H.1) or (H.2), for every \(x_1,\dots ,x_m\in \mathbb {R}^\ell \), we have

$$\begin{aligned} \mathbb {E}\left[ \prod _{j=1}^m u(t,x_j)\right]&=\int _{(\mathbb {R}^\ell )^m} \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m} \gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)\right. \right. \nonumber \\&\qquad \left. \left. +x_j-x_k+\frac{s}{t}(y_j-y_k)\right) ds\right\} \nonumber \\&\qquad \times \prod _{j=1}^m [p_t(y_j)u_{0}(x_j+dy_j)]\,. \end{aligned}$$
(4.10)

and

$$\begin{aligned} \mathbb {E}\left[ \prod _{j=1}^m \frac{u(t,x_j)}{p_t*|u_0|(x_j)} \right] \le \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m} \gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)\right) ds\right\} \,. \end{aligned}$$
(4.11)

Proof

We prove the result under the hypothesis (H.2). The proof under hypothesis (H.1) is easier and is omitted.

Step 1: we first consider (4.10) and (4.11) when the initial data are Dirac masses. More precisely, we will show that

$$\begin{aligned}&\mathbb {E}\left[ \prod _{j=1}^m \frac{\mathcal {Z}(z_j;t,x_j)}{p_t(x_j-z_j)} \right] \nonumber \\&\quad = \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m} \gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)+\frac{s}{t}(z_j-z_k) +\frac{t-s}{t}(x_j-x_k)\right) ds\right\} \,,\nonumber \\ \end{aligned}$$
(4.12)

and

$$\begin{aligned} \mathbb {E}\left[ \prod _{j=1}^m \frac{\mathcal {Z}(z_j;t,x_j)}{p_t(x_j-z_j)} \right] \le \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m} \gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)\right) ds\right\} \,. \end{aligned}$$
(4.13)

Fix \(\varepsilon >0\). Identity (4.12), with \(\mathcal {Z},\gamma \) replaced by \(\mathcal {Z}_\varepsilon ,\gamma _\varepsilon \), has been obtained in (4.4). Namely, we have

$$\begin{aligned}&\mathbb {E}\left[ \prod _{j=1}^m \frac{\mathcal {Z}_ \varepsilon (z_j;t,x_j)}{p_t(x_j-z_j)} \right] \nonumber \\&\quad = \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m} \gamma _ \varepsilon \left( B_{0,t}^j(s)-B_{0,t}^k(s)+\frac{s}{t}(z_j-z_k) +\frac{t-s}{t}(x_j-x_k)\right) ds\right\} \nonumber \\ \end{aligned}$$
(4.14)

Using arguments analogous to those in [16, Proposition 4.2], we can show that for every \(\kappa \in \mathbb {R}\), as \(\varepsilon \downarrow 0\), the functions

$$\begin{aligned}&(x_1,z_1,\dots ,x_m,z_m)\mapsto \\&\quad \mathbb {E}\exp \left\{ \kappa \int _0^t \sum _{1\le j<k\le m}\gamma _ \varepsilon \left( B_{0,t}^j(s)\!-\!B_{0,t}^k(s)\!+\!\frac{s}{t}(z_j-z_k) \!+\!\frac{t-s}{t}(x_j-x_k)\right) ds \right\} \end{aligned}$$

converge uniformly on \(\mathbb {R}^{2m\ell }\) to the function

$$\begin{aligned}&(x_1,z_1,\dots ,x_m,z_m)\mapsto \\&\quad \mathbb {E}\exp \left\{ \kappa \int _0^t \sum _{1\le j<k\le m}\gamma \left( B_{0,t}^j(s)-B_{0,t}^k(s)+\frac{s}{t}(z_j-z_k) +\frac{t-s}{t}(x_j-x_k)\right) ds \right\} \,. \end{aligned}$$

In addition, in view of Proposition 4.3,

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\mathbb {E}\left[ \prod _{j=1}^m \frac{\mathcal {Z}_ \varepsilon (z_j;t,x_j)}{p_t(x_j-z_j)} \right] =\mathbb {E}\left[ \prod _{j=1}^m \frac{\mathcal {Z}(z_j;t,x_j)}{p_t(x_j-z_j)} \right] \,. \end{aligned}$$

Sending \(\varepsilon \downarrow 0\) in (4.14), we obtain (4.12). The estimate (4.13) is obtained analogously using (4.5). We omit the details.

Step 2: For general initial data satisfying (1.13), we note that from (4.1),

$$\begin{aligned} \prod _{j=1}^m u(t,x_j)=\int _{(\mathbb {R}^\ell )^m} \prod _{j=1}^m [\mathcal {Z}(z_j;t,x_j) u_{0}(dz_j)]\,. \end{aligned}$$

From here, it is evident that (4.10), (4.11) are consequences of (4.12), (4.13) and Fubini’s theorem. \(\square \)

We conclude this section with the following observation.

Remark 4.5

Under (H.1), it is evident from (4.2) that \(\mathcal {Z}(z;t,x) \) is non-negative for every z, t, x. Under (H.2), thanks to Proposition 4.3, \(\mathcal {Z}(z;t,x)\) is the limit of non-negative random variables, hence \(\mathcal {Z}(z;t,x) \) is also non-negative for every z, t, x. Furthermore, in view of (4.1), if \(u_0\) is non-negative then \(u(t,x)\) is non-negative for every t, x.

5 Moment asymptotic and regularity

5.1 Moment asymptotic

We begin with a study of high moments. Under hypothesis (H.1), the high moment asymptotics are governed by the value of \(\gamma \) at the origin.

Proposition 5.1

Under (H.1), for every \(T>0\), we have

$$\begin{aligned} \limsup _{m \rightarrow \infty } m^{-2} \log \sup _{0<t\le T} \sup _{x\in \mathbb {R}^\ell } \mathbb {E}\left( \frac{\mathcal {Z}(x_0;t,x)}{p_t(x-x_0)}\right) ^m \le \frac{T}{2}\gamma (0)\,. \end{aligned}$$
(5.1)

Proof

Since \(\gamma \) is positive definite, \(\gamma (x)\le \gamma (0)\) for all \(x \in \mathbb {R}^{\ell }\). It follows from (4.11) that

$$\begin{aligned} \mathbb {E}\left( \frac{\mathcal {Z}(x_0;t,x)}{p_t(x-x_0)}\right) ^m \le \exp \left( \frac{m(m-1)}{2} t \gamma (0) \right) \,. \end{aligned}$$

This immediately yields (5.1). \(\square \)

The following intermediate result will be applied to the spectral measure \(e^{-2 \varepsilon |\xi |^2} \mu (\xi ) d \xi \) to obtain the moment asymptotics under (H.2).

Lemma 5.2

Suppose that \(\mu (\mathbb {R}^\ell )<\infty \). For each \(t,T>0\) and each positive integer m, we put \(t_m = m^{\frac{2}{2-\alpha }}t\) and \(T_m = m^{\frac{2}{2-\alpha }}T\). Then

$$\begin{aligned} \limsup _{m \rightarrow \infty } \frac{1}{mT_m} \log \sup _{0 \le t \le T} \mathbb {E}\exp \left( \frac{1}{m} \sum _{1 \le j < k \le m} \int _0^{t_m} \gamma \left( B_{0, t_m}^j(s)-B_{0, t_m}^k(s) \right) ds \right) \le \frac{1}{2} \mathcal {E}_H(\gamma )\,. \end{aligned}$$
(5.2)

Proof

The condition \(\mu (\mathbb {R}^\ell )<\infty \) implies that the inverse Fourier transform of \(\mu (\xi )\) exists and is a bounded continuous function \(\gamma \). Furthermore, \(\max _{x\in \mathbb {R}^\ell }\gamma (x)=\gamma (0)\). For each \(\lambda \in (0,1)\), we note that

$$\begin{aligned}&\mathbb {E}\exp \left\{ \frac{1}{m} \sum _{1 \le j< k \le m} \int _0^{t_m} \gamma \left( B_{0, t_m}^j(s)-B_{0, t_m}^k(s) \right) ds \right\} \\&\quad \le e^{\frac{(m-1)t_m}{2}\gamma (0)(1- \lambda )} \mathbb {E}\exp \left\{ \frac{1}{m} \sum _{1 \le j < k \le m} \int _0^{\lambda t_m} \gamma \left( B_{0, t_m}^j(s)-B_{0, t_m}^k(s) \right) ds \right\} \,. \end{aligned}$$

Using (2.9), we see that the expectation above is at most

$$\begin{aligned} (1- \lambda )^{-\frac{m\ell }{2}}\mathbb {E}\exp \left\{ \frac{1}{m}\sum _{1\le j<k\le m}\int _0^{\lambda t_m}\gamma (B^j(s)-B^k(s))ds\right\} \,. \end{aligned}$$

In addition, reasoning as in [16, Lemma 4.1], we see that

$$\begin{aligned}&\sup _{0 \le t \le T}\mathbb {E}\exp \left\{ \frac{1}{m} \sum _{1 \le j< k \le m} \int _0^{\lambda t_m} \gamma \left( B^j(s)-B^k(s) \right) ds \right\} \\&\quad =\mathbb {E}\exp \left\{ \frac{1}{m} \sum _{1 \le j < k \le m} \int _0^{\lambda T_m} \gamma \left( B^j(s)-B^k(s) \right) ds \right\} \,. \end{aligned}$$

It follows that

$$\begin{aligned}&\limsup _{m\rightarrow \infty }\frac{1}{mT_m} \log \sup _{0 \le t \le T} \mathbb {E}\exp \left\{ \frac{1}{m} \sum _{1 \le j< k \le m} \int _0^{t_m} \gamma \left( B_{0, t_m}^j(s)-B_{0, t_m}^k(s) \right) ds \right\} \\&\quad \le \frac{1- \lambda }{2}\gamma (0)+ \limsup _{m\rightarrow \infty }\frac{1}{mT_m}\log \mathbb {E}\exp \left\{ \frac{1}{m} \sum _{1 \le j < k \le m} \int _0^{\lambda T_m} \gamma \left( B^j(s)-B^k(s) \right) ds \right\} \!, \end{aligned}$$

where we have used the fact that \(\lim _{m\rightarrow \infty }\frac{1}{mT_m}\log (1- \lambda )^{-\frac{m\ell }{2}}=0\). Applying [8, Theorem 1.1], we get

$$\begin{aligned} \limsup _{m\rightarrow \infty }\frac{1}{m \lambda T_m}\log \mathbb {E}\exp \left\{ \frac{1}{m} \sum _{1 \le j < k \le m} \int _0^{\lambda T_m} \gamma \left( B^j(s)-B^k(s) \right) ds \right\} \le \frac{1}{2} \mathcal {E}_H(\gamma )\,. \end{aligned}$$

Thus we have shown

$$\begin{aligned}&\limsup _{m \rightarrow \infty } \frac{1}{mT_m} \log \sup _{0 \le t \le T} \mathbb {E}\exp \left( \frac{1}{m} \sum _{1 \le j < k \le m} \int _0^{t_m} \gamma \left( B_{0, t_m}^j(s)-B_{0, t_m}^k(s) \right) ds \right) \\&\quad \le \frac{\lambda }{2} \mathcal {E}_H(\gamma ) + \frac{1- \lambda }{2}\gamma (0)\,. \end{aligned}$$

Finally, we send \(\lambda \rightarrow 1^-\) to finish the proof. \(\square \)

Proposition 5.3

Assuming (H.2), for every fixed \(T>0\),

$$\begin{aligned} \lim _{m\rightarrow \infty }m^{-\frac{4-\alpha }{2-\alpha }}\log \sup _{0<t\le T}\sup _{x\in \mathbb {R}^\ell }\mathbb {E}\left( \frac{\mathcal {Z}(x_0;t,x)}{p_t(x-x_0)}\right) ^m \le \frac{T}{2}{\mathcal {E}}_H(\gamma ) \end{aligned}$$
(5.3)

where \({\mathcal {E}}_H(\gamma )\) is the Hartree energy defined in (3.1).

Proof

Applying inequality (4.11), we have

$$\begin{aligned} \sup _{x\in \mathbb {R}^\ell }\mathbb {E}\left( \frac{\mathcal {Z}(x_0;t,x)}{p_t(x-x_0)}\right) ^m\le \mathbb {E}\exp \left\{ \int _0^t\sum _{1\le j<k\le m}\gamma (B_{0,t}^j(s)-B_{0,t}^k(s))ds \right\} \,. \end{aligned}$$

In addition, by the change of variables \(s \rightarrow s m^{-\frac{2}{2- \alpha }}\) and the scaling property of the Brownian bridge, \(\{B_{0,\lambda t}(\lambda s),s\in [0, t]\} {\mathop {=}\limits ^{\text{ law }}} \{\sqrt{\lambda } B_{0,t}(s),s\in [0,t]\}\), the right-hand side of the above expression equals

$$\begin{aligned} \mathbb {E}\exp \left\{ \frac{1}{m}\int _0^{m^{\frac{2}{2-\alpha }}t}\sum _{1\le j<k\le m}\gamma \left( B_{0,m^{\frac{2}{2-\alpha }}t}^j(s)-B_{0,m^{\frac{2}{2-\alpha }}t}^k(s)\right) ds \right\} \,. \end{aligned}$$

Hence, denoting \(t_m=m^{\frac{2}{2- \alpha }}t\) and \(T_m=m^{\frac{2}{2- \alpha }}T\), we see that (5.3) is equivalent to the statement

$$\begin{aligned}&\limsup _{m\rightarrow \infty }\frac{1}{mT_m} \log \sup _{0<t_m\le T_m}\mathbb {E}\exp \left\{ \frac{1}{m}\int _0^{t_m}\sum _{1\le j<k\le m}\gamma (B_{0,t_m}^j(s)-B_{0,t_m}^k(s))ds \right\} \nonumber \\&\quad \le \frac{1}{2} {\mathcal {E}}_H(\gamma )\,. \end{aligned}$$
(5.4)
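The bridge-scaling identity used above can be verified at the level of covariances: both sides are centered Gaussian processes, and the Brownian bridge from 0 to 0 on \([0,T]\) has \(\mathrm {Cov}(B_{0,T}(s_1),B_{0,T}(s_2))=s_1\wedge s_2-s_1s_2/T\). A minimal numerical sketch (plain Python; the parameter values are arbitrary):

```python
# Covariance of the Brownian bridge B_{0,T} from 0 to 0 on [0, T].
def bridge_cov(s1, s2, T):
    return min(s1, s2) - s1 * s2 / T

lam, t = 0.3, 2.0
for s1 in (0.1, 0.7, 1.5):
    for s2 in (0.2, 1.0, 1.9):
        lhs = bridge_cov(lam * s1, lam * s2, lam * t)  # Cov of B_{0, lam*t}(lam*s)
        rhs = lam * bridge_cov(s1, s2, t)              # Cov of sqrt(lam) * B_{0,t}(s)
        assert abs(lhs - rhs) < 1e-12
```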

Let \(p,q>1\) be such that \(p^{-1}+q^{-1}=1\). Writing \(\gamma =\gamma _\varepsilon +(\gamma -\gamma _\varepsilon )\) and applying Hölder's inequality, we obtain

$$\begin{aligned} \mathbb {E}\exp \left\{ \frac{1}{m}\int _0^{t_m}\sum _{1\le j<k\le m}\gamma (B_{0,t_m}^j(s)-B_{0,t_m}^k(s))ds \right\} \le \mathcal {A}^{\frac{1}{p}}\mathcal {B}^{\frac{1}{q}} \end{aligned}$$

where

$$\begin{aligned} \mathcal {A}&=\sup _{0<t_m\le T_m} \mathbb {E}\exp \left\{ \frac{p}{m}\int _0^{t_m}\sum _{1\le j<k\le m}\gamma _ \varepsilon (B_{0,t_m}^j(s)-B_{0,t_m}^k(s))ds \right\} \\ \mathcal {B}&=\sup _{0<t_m\le T_m}\mathbb {E}\exp \left\{ \frac{q}{m}\int _0^{t_m}\sum _{1\le j<k\le m}(\gamma - \gamma _\varepsilon )(B_{0,t_m}^j(s)-B_{0,t_m}^k(s))ds \right\} \,. \end{aligned}$$

From Lemma 5.2 and the fact that \({\mathcal {E}}_H(\gamma _\varepsilon ) \le {\mathcal {E}}_H(\gamma )\) (see (3.3)), we have

$$\begin{aligned} \lim _{p\rightarrow 1^+}\limsup _{m\rightarrow \infty }\frac{1}{mT_m}\log \mathcal {A}\le \frac{1}{2}{\mathcal {E}}_H(\gamma )\,. \end{aligned}$$

Hence, it suffices to show that for every fixed \(q>1\),

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}\limsup _{m\rightarrow \infty }\frac{1}{mT_m}\log \mathcal {B}=0\,. \end{aligned}$$
(5.5)

By the Cauchy–Schwarz inequality and the fact that \(B_{0,t} {{\mathop {=}\limits ^{\text {law}}}} B_{0,t}(t-\cdot )\), we have

$$\begin{aligned}&\mathbb {E}\exp \left\{ \frac{q}{m} \int _0^{t_m} \sum _{1 \le j< k \le m} \left( \gamma - \gamma _{\varepsilon } \right) \left( B_{0, t_m}^j(s)- B_{0, t_m}^k(s) \right) ds \right\} \\&\quad \le \mathbb {E}\exp \left\{ \frac{2q}{m} \int _0^{\frac{t_m}{2}} \sum _{1 \le j < k \le m} \left( \gamma - \gamma _{\varepsilon } \right) \left( B_{0, t_m}^j(s)- B_{0, t_m}^k(s) \right) ds \right\} \,. \end{aligned}$$

Combining this with (2.9), we arrive at

$$\begin{aligned}&\mathbb {E}\exp \left\{ \frac{q}{m} \int _0^{t_m} \sum _{1 \le j< k \le m} \left( \gamma - \gamma _{\varepsilon } \right) \left( B_{0, t_m}^j(s)- B_{0, t_m}^k(s) \right) ds \right\} \\&\quad \le 2^{m \ell } \mathbb {E}\exp \left\{ \frac{2q}{m} \int _0^{\frac{t_m}{2}} \sum _{1 \le j < k \le m} \left( \gamma - \gamma _{\varepsilon } \right) \left( B^j(s)- B^k(s) \right) ds \right\} \,. \end{aligned}$$

Note that the right-hand side of the above inequality is the m-th moment \(\mathbb {E}u(\frac{t_m}{2}, x)^m\) of the solution to Eq. (1.1) driven by the noise with spatial covariance \(\frac{2q}{m}\left( \gamma - \gamma _{\varepsilon }\right) \) and with initial condition \(u_0(x) \equiv 2^{\ell }\). Using hyper-contractivity as in [16, 19], we have

$$\begin{aligned}&\mathbb {E}\exp \left\{ \frac{2q}{m} \int _0^{\frac{t_m}{2}} \sum _{1 \le j< k \le m} \left( \gamma - \gamma _{\varepsilon } \right) \left( B^j(s)- B^k(s) \right) ds \right\} \\&\quad \le \left[ \mathbb {E}\exp \left\{ \frac{2q(m-1)}{m} \int _0^{\frac{t_m}{2}} \left( \gamma - \gamma _{\varepsilon } \right) \left( B^1(s)- B^2(s) \right) ds \right\} \right] ^{\frac{m}{2}} \\&\quad \le \left[ \sum _{k=0}^{\infty } (2q)^k \int _{[0, \frac{t_m}{2}]^k_{<}} \int _{\mathbb {R}^{\ell k}} \prod _{j=1}^k \left( e^{-|\eta _j|^2 } (s_{j+1}-s_j)^{-\frac{\alpha }{2}}\right) \right. \\&\qquad \left. \prod _{j=1}^k \left( 1-e^{-\varepsilon (s_{j+1} - s_j)^{-1} |\eta _j|^2} \right) \mu (\eta )d\eta ds \right] ^{\frac{m}{2}} \end{aligned}$$

where in the last line we have used the estimate (3.7) in [15]; here \([0,\frac{t_m}{2}]^k_<=\{(s_1,\dots ,s_k)\in [0,\frac{t_m}{2}]^k:s_1<\cdots <s_k \}\) and \(\mu (\eta )d \eta ds\) is an abbreviation for \(\prod _{j=1}^k \mu (\eta _j)d\eta _j ds_j\). Since \(\alpha < 2\), we can choose \(\beta >0\) such that \(\beta < 1-\frac{\alpha }{2}\). Then using the elementary inequality

$$\begin{aligned} 1-e^{-x} \le C_{\beta }x^{\beta }\quad \forall x>0\,, \end{aligned}$$

and the asymptotic behavior of the Mittag-Leffler function ([12, p. 208]), we obtain

$$\begin{aligned}&\sum _{k=0}^{\infty } (2q)^k \int _{[0, \frac{t_m}{2}]^k_{<}} \int _{\mathbb {R}^{\ell k}} \prod _{j=1}^k \left( e^{-|\eta _j|^2 } (s_{j+1}-s_j)^{-\frac{\alpha }{2}}\right) \prod _{j=1}^k \left( 1-e^{-\varepsilon (s_{j+1} - s_j)^{-1} |\eta _j|^2} \right) \mu (\eta )d\eta ds\\&\quad \le \sum _{k=0}^{\infty } (C_{\beta }2q\varepsilon ^{\beta })^k \int _{[0, \frac{t_m}{2}]^k_{<}} \int _{\mathbb {R}^{\ell k}} \prod _{j=1}^k \left( e^{-|\eta _j|^2 } |\eta _j|^{2\beta }\right) (s_{j+1}-s_j)^{-\frac{\alpha }{2}-\beta } \mu (\eta )d\eta ds\\&\quad \le \sum _{k=0}^{\infty } \frac{(Cq)^k t_m^{(-\frac{\alpha }{2} -\beta +1) k } \varepsilon ^{k\beta }}{\Gamma ((-\frac{\alpha }{2} -\beta +1) k + 1)} \le C \exp \left( c t_m \varepsilon ^{\frac{\beta }{ -\frac{\alpha }{2} -\beta +1}} \right) \,. \end{aligned}$$
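For the elementary inequality used above, one may in fact take \(C_\beta =1\) for \(\beta \in (0,1]\): when \(x<1\), \(1-e^{-x}\le x\le x^\beta \), and when \(x\ge 1\), \(1-e^{-x}<1\le x^\beta \). A quick numerical check:

```python
import math

def holds(beta, xs):
    # Verify 1 - exp(-x) <= x**beta on a sample of points x > 0.
    return all(1 - math.exp(-x) <= x ** beta for x in xs)

xs = [k / 100 for k in range(1, 1001)]  # x in (0, 10]
assert all(holds(beta, xs) for beta in (0.1, 0.5, 0.9, 1.0))
```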

Hence, we have shown

$$\begin{aligned} \mathcal {B}\le C^m \exp \left( c\, m T_m \varepsilon ^{\frac{\beta }{ -\frac{\alpha }{2} -\beta +1}} \right) \,, \end{aligned}$$

from which (5.5) follows. The proof of (5.3) is complete. \(\square \)

5.2 Hölder continuity

We investigate the regularity of the process \(\frac{\mathcal {Z}(x;t,y)}{p_t(y-x)}\) in the variables x and y. These properties will be used in the proof of the upper bound. For each integer \(m\ge 2\) and \(t>0\), we recall that \(\Theta _t(m)\) is defined in (2.10).

Note that from Proposition 4.4, we have

$$\begin{aligned} \sup _{s\in (0, t]}\sup _{x,y_1,\dots ,y_{m}\in \mathbb {R}^\ell }\mathbb {E}\prod _{j=1}^{m}\frac{\mathcal {Z}(x;s,y_j)}{p_s(y_j-x)}=\Theta _t(m)\,. \end{aligned}$$
(5.6)

Lemma 5.4

For every \(r>0\) and \(y_1,y_2\in \mathbb {R}^\ell \)

$$\begin{aligned} \Vert |p_r(\cdot -y_1)-p_r(\cdot -y_2)|\Vert ^2_{\mathfrak {H}_0}\le C r^{-\frac{\alpha }{2}}\left( \frac{|y_2-y_1|}{r^{1/2}}\wedge 1 \right) \end{aligned}$$

under  (H.2); and

$$\begin{aligned} \Vert |p_r(\cdot -y_1)-p_r(\cdot -y_2)|\Vert ^2_{\mathfrak {H}_0}\le C \left( \frac{|y_2-y_1|^2}{r}\wedge 1 \right) \end{aligned}$$

under (H.1). In the above, the constant C depends neither on \(y_1,y_2\) nor on r.

Proof

We denote \(f(\cdot )=|p_r(\cdot -y_1)-p_r(\cdot -y_2)|\). Assuming first (H.2), we observe the following simple estimate

$$\begin{aligned} \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }f(y)f(z)\gamma (y-z)dydz\le \sup _{z\in \mathbb {R}^\ell }|f*\gamma (z)| \int _{\mathbb {R}^\ell }f(y)dy\,. \end{aligned}$$

Noting that

$$\begin{aligned} \sup _{z\in \mathbb {R}^\ell }|f*\gamma (z)|\le 2\sup _{z\in \mathbb {R}^\ell }|p_r*\gamma (z)|=2p_r*\gamma (0)\lesssim r^{-\frac{\alpha }{2}} \end{aligned}$$

and

$$\begin{aligned} \int _{\mathbb {R}^\ell }f(y)dy\lesssim \left( \frac{|y_2-y_1|}{r^{1/2}}\wedge 1 \right) \,, \end{aligned}$$
(5.7)

the result easily follows. Under (H.1), we use the inequality

$$\begin{aligned} \iint _{\mathbb {R}^\ell \times \mathbb {R}^\ell }f(y)f(z)\gamma (y-z)dydz\le \gamma (0)\left( \int _{\mathbb {R}^\ell }f(y)dy \right) ^2 \end{aligned}$$

together with (5.7) to obtain the result. \(\square \)
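A discrete analogue of the bound used under (H.1): since a continuous non-negative definite \(\gamma \) satisfies \(\gamma \le \gamma (0)\) pointwise, the quadratic form is dominated by \(\gamma (0)\left( \int f\right) ^2\). A sketch on a grid, with \(\gamma (x)=e^{-x^2}\) as a stand-in covariance:

```python
import math

gamma = lambda x: math.exp(-x * x)   # stand-in covariance, gamma(0) = 1
f = [0.5, 1.0, 0.2, 0.7]             # non-negative "function" values on a grid

quad = sum(f[i] * f[j] * gamma(i - j) for i in range(len(f)) for j in range(len(f)))
bound = gamma(0) * sum(f) ** 2
assert quad <= bound
```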

Proposition 5.5

Assume (H.1) or (H.2). There exists a constant \(\eta \in (0,1)\) such that for every compact set K and every integer \(m\ge 2\),

$$\begin{aligned} \sup _{w\in \mathbb {R}^\ell } \left\| \sup _{\begin{array}{c} x_1,x_2\in K,\\ y\in B(w,1) \end{array}} \frac{\left| \frac{\mathcal {Z}(x_1;t,y)}{p_t(y-x_1)}-\frac{\mathcal {Z}(x_2;t,y)}{p_t(y-x_2)}\right| }{|x_2-x_1|^{\eta }} \right\| _{L^m(\Omega )}\le c_K(t) m^{\frac{1}{2}}[\Theta _t(m)]^{\frac{1}{m}}e^{cm^{\frac{2}{2-\bar{\alpha }}}}\,, \end{aligned}$$
(5.8)

and

$$\begin{aligned} \sup _{w, x\in \mathbb {R}^\ell } \left\| \sup _{ y_1,y_2\in B(w,1)} \frac{\left| \frac{\mathcal {Z}(x;t,y_1)}{p_t(y_1-x)}-\frac{\mathcal {Z}(x;t,y_2)}{p_t(y_2-x)}\right| }{|y_2-y_1|^{\eta }} \right\| _{L^m(\Omega )}\le c_K(t) m^{\frac{1}{2}}[\Theta _t(m)]^{\frac{1}{m}}\,, \end{aligned}$$
(5.9)

where B(w, 1) is the closed unit ball in \(\mathbb {R}^\ell \) centered at w. In the above, the constant c depends only on \(\bar{\alpha }\) and \(\eta \), and \(c_K(t)\) depends only on \(K,t, \eta \).

Proof

We present the proof under hypothesis (H.2) in detail. The proof for the other case is similar and is omitted. We first show that for every \(\eta \in (0,2- \alpha )\),

$$\begin{aligned} \sup _{x\in \mathbb {R}^\ell }\left\| \frac{\mathcal {Z}(x_1;t,x)}{p_t(x-x_1)}-\frac{\mathcal {Z}(x_2;t,x)}{p_t(x-x_2)}\right\| _{L^m(\Omega )} \lesssim _t \sqrt{m}[\Theta _t(m)]^{\frac{1}{m}}|x_2-x_1|^{\frac{\eta }{2}} e^{cm^{\frac{2}{2-\alpha }}}\,. \end{aligned}$$
(5.10)

In the above, we have added a subscript t to \(\lesssim \) to emphasize that the implied constant depends on t. Fix \(t>0\) and \(x_1,x_2,x\in \mathbb {R}^\ell \). From (2.7), we have

$$\begin{aligned}&\frac{\mathcal {Z}(x_1;t,x)}{p_t(x-x_1)}-\frac{\mathcal {Z}(x_2;t,x)}{p_t(x-x_2)} =\int _0^t\int _{\mathbb {R}^\ell } f(s,y) \frac{\mathcal {Z}(x_1;s,y)}{p_s(y-x_1)} W(ds,dy)\nonumber \\&\quad +\int _0^t\int _{\mathbb {R}^\ell }p_{\frac{s(t-s)}{t}}\left( y-x_2-\frac{s}{t}(x-x_2) \right) \left[ \frac{\mathcal {Z}(x_2;s,y)}{p_s(y-x_2)}-\frac{\mathcal {Z}(x_1;s,y)}{p_s(y-x_1)}\right] W(ds,dy)\nonumber \\ \end{aligned}$$
(5.11)

where

$$\begin{aligned} f(s,y)=p_{\frac{s(t-s)}{t}} \left( y-x_1-\frac{s}{t}(x-x_1)\right) - p_{\frac{s(t-s)}{t}}\left( y-x_2-\frac{s}{t}(x-x_2)\right) \,. \end{aligned}$$

Obviously f also depends on \(t,x_1,x_2\) and x; however, we omit these parameters from the notation. For each integer \(m\ge 2\), applying Lemma 2.1 we see that

$$\begin{aligned} \left\| \int _0^t\int _{\mathbb {R}^\ell } f(s,y) \frac{\mathcal {Z}(x_1;s,y)}{p_s(y-x_1)} W(ds,dy)\right\| _{L^m(\Omega )} \le \sqrt{4m} [\Theta _t(m)]^{\frac{1}{m}}\Vert |f(s,y)| \mathbf {1}_{[0,t]}(s) \Vert _{\mathfrak {H}_{s,y}} \,. \end{aligned}$$

Applying Lemma 5.4, for every \(\eta \in (0,2- \alpha )\), there exists \(c_{\eta }>0\) such that

$$\begin{aligned} \Vert |f(s,y)| \mathbf {1}_{[0,t]}(s) \Vert _{\mathfrak {H}_{s,y}}\le c_{\eta }t^{\frac{1}{2}-\frac{\alpha +\eta }{4}} |x_2-x_1|^{\frac{\eta }{2}}\,. \end{aligned}$$

Hence,

$$\begin{aligned} \left\| \int _0^t\int _{\mathbb {R}^\ell } f(s,y) \frac{\mathcal {Z}(x_1;s,y)}{p_s(y-x_1)} W(ds,dy)\right\| _{L^m(\Omega )} \le c_{\eta }t^{\frac{1}{2}-\frac{\alpha +\eta }{4}} \sqrt{m}[\Theta _t(m)]^{\frac{1}{m}}|x_2-x_1|^{\frac{\eta }{2}}\,. \end{aligned}$$
(5.12)

For each \(s>0\), we set

$$\begin{aligned} M_s=\sup _{x\in \mathbb {R}^\ell }\left\| \frac{\mathcal {Z}(x_1;s,x)}{p_s(x-x_1)} -\frac{\mathcal {Z}(x_2;s,x)}{p_s(x-x_2)}\right\| _{L^m(\Omega )}\,. \end{aligned}$$

It follows from Lemma 2.1 that

$$\begin{aligned}&\left\| \int _0^t\int _{\mathbb {R}^\ell }p_{\frac{s(t-s)}{t}}\left( y-x_2-\frac{s}{t}(x-x_2) \right) \left[ \frac{\mathcal {Z}(x_2;s,y)}{p_s(y-x_2)}-\frac{\mathcal {Z}(x_1;s,y)}{p_s(y-x_1)} \right] W(ds,dy)\right\| _{L^m(\Omega )}\\&\quad \le c \sqrt{m}\left( \int _0^t \left\| p_{\frac{s(t-s)}{t}} \left( \cdot -x_2-\frac{s}{t}(x-x_2) \right) \right\| ^2_{\mathfrak {H}_{0}}M_s^2ds \right) ^{\frac{1}{2}}\\&\quad =c\sqrt{m}\left( \int _0^t \left( \frac{s(t-s)}{t} \right) ^{-\frac{\alpha }{2}}M_s^2 ds \right) ^{\frac{1}{2}}\,, \end{aligned}$$

where c is some constant. Applying these estimates in (5.11) yields

$$\begin{aligned} M_t\le c_ \eta t^{\frac{1}{2}-\frac{\alpha +\eta }{4}} \sqrt{m}[\Theta _t(m)]^{\frac{1}{m}}|x_2-x_1|^{\frac{\eta }{2}}+c\sqrt{m}\left( \int _0^t \left( \frac{s(t-s)}{t} \right) ^{-\frac{\alpha }{2}}M_s^2 ds \right) ^{\frac{1}{2}}\,. \end{aligned}$$

We now apply Lemma 2.4 to get

$$\begin{aligned} M_t\lesssim _t \sqrt{m}[\Theta _t(m)]^{\frac{1}{m}}|x_2-x_1|^{\frac{\eta }{2}}e^{c m^{\frac{2}{2-\alpha }}}\,, \end{aligned}$$

which is exactly (5.10).

It remains to prove the estimate (5.8). Fix \(t>0\) and \(x_1,x_2,y_1,y_2\in \mathbb {R}^\ell \). Observe that

$$\begin{aligned}&\frac{\mathcal {Z}(x_1;t,y_1)}{p_t(y_1-x_1)}-\frac{\mathcal {Z}(x_2;t,y_2)}{p_t(y_2-x_2)}\\&\quad =\int _0^t\int _{\mathbb {R}^\ell } g(s,y) \frac{\mathcal {Z}(x_1;s,y)}{p_s(y-x_1)} W(ds,dy)\\&\qquad +\!\int _0^t\int _{\mathbb {R}^\ell }p_{\frac{s(t-s)}{t}}\left( y\!-\!x_2-\frac{s}{t}(y_2\!-\!x_2) \right) \left[ \frac{\mathcal {Z}(x_2;s,y)}{p_s(y-x_2)}-\frac{\mathcal {Z}(x_1;s,y)}{p_s(y-x_1)}\right] W(ds,dy)\\&\quad =I_1+I_2\,, \end{aligned}$$

where

$$\begin{aligned} g(s,y)=p_{\frac{s(t-s)}{t}} \left( y-x_1-\frac{s}{t}(y_1-x_1)\right) - p_{\frac{s(t-s)}{t}}\left( y-x_2-\frac{s}{t}(y_2-x_2)\right) \,. \end{aligned}$$

Similar to (5.12), we have

$$\begin{aligned} \left\| I_1\right\| _{L^m(\Omega )} \lesssim _t\sqrt{m} [\Theta _t(m)]^{\frac{1}{m}}(|x_2-x_1|+|y_2-y_1|)^{\frac{\eta }{2}} \,. \end{aligned}$$
(5.13)

\(I_2\) can be estimated using Lemma 2.1 and (5.10)

$$\begin{aligned} \Vert I_2\Vert _{L^m(\Omega )}\lesssim _t \sqrt{m} [\Theta _t(m)]^{\frac{1}{m}}|x_2-x_1|^{\frac{\eta }{2}} e^{c m ^{\frac{2}{2-\alpha }}}\,. \end{aligned}$$

Hence, we have shown

$$\begin{aligned} \left\| \frac{\mathcal {Z}(x_1;t,y_1)}{p_t(y_1-x_1)}-\frac{\mathcal {Z}(x_2;t,y_2)}{p_t(y_2-x_2)}\right\| _{L^m(\Omega )}\lesssim _t\sqrt{m} [\Theta _t(m)]^{\frac{1}{m}}(|x_2-x_1|+|y_2-y_1|)^{\frac{\eta }{2}} e^{cm^{\frac{2}{2-\alpha }}} \,. \end{aligned}$$

At this point, the estimate (5.8) follows from the Garsia-Rodemich-Rumsey inequality (cf. [13]).

The proof of (5.9) is simpler. Actually, by writing

$$\begin{aligned}&\frac{\mathcal {Z}(x;t,y_1)}{p_t(y_1-x)}-\frac{\mathcal {Z}(x;t,y_2)}{p_t(y_2-x)}\\&\quad =\int _0^t\int _{\mathbb {R}^\ell } \left( p_{\frac{s(t-s)}{t}} \left( y-x-\frac{s}{t}(y_1-x)\right) - p_{\frac{s(t-s)}{t}}\left( y-x-\frac{s}{t}(y_2-x)\right) \right) \\&\qquad \frac{\mathcal {Z}(x;s,y)}{p_s(y-x)} W(ds,dy)\,, \end{aligned}$$

we get an estimate for \(\Vert \frac{\mathcal {Z}(x;t,y_1)}{p_t(y_1-x)}-\frac{\mathcal {Z}(x;t,y_2)}{p_t(y_2-x)}\Vert _{L^m(\Omega )}\) as in (5.13). The estimate (5.9) again follows from the Garsia-Rodemich-Rumsey inequality (cf. [13]). We omit the details. \(\square \)

In proving (1.26), we need to handle the asymptotics of \(\sup _{\varepsilon <1}\sup _ {x\in K,|y|\le R} \frac{\mathcal {Z}_{\varepsilon }(x;t,y)}{p_t(y-x)}\); thus we record the Hölder continuity of \(\frac{\mathcal {Z}_{\varepsilon }(x;t,y)}{p_t(y-x)}\) with respect to \(\varepsilon ,x,y\). The proof is similar to that of Proposition 5.5 and is left to the reader.

Proposition 5.6

Assume (H.1) or (H.2). There exists a constant \(\eta \in (0,1)\) such that for every compact set K and every integer \(m\ge 2\),

$$\begin{aligned} \sup _{w\in \mathbb {R}^\ell } \left\| \sup _{\begin{array}{c} x_1,x_2\in K, y\in B(w,1)\\ \varepsilon , \varepsilon ' \in (0,1] \end{array}} \frac{\left| \frac{\mathcal {Z}_{\varepsilon }(x_1;t,y)}{p_t(y-x_1)}-\frac{\mathcal {Z}_{\varepsilon '}(x_2;t,y)}{p_t(y-x_2)}\right| }{(|x_2-x_1|+ |\varepsilon - \varepsilon '|)^{\eta }} \right\| _{L^m(\Omega )}\le c_K(t) m^{\frac{1}{2}}[\Theta _t(m)]^{\frac{1}{m}}e^{cm^{\frac{2}{2-\bar{\alpha }}}}\,, \end{aligned}$$
(5.14)

and

$$\begin{aligned} \sup _{w, x\in \mathbb {R}^\ell ;\varepsilon \le 1} \left\| \sup _{ y_1,y_2\in B(w,1)} \frac{\left| \frac{\mathcal {Z}_{\varepsilon }(x;t,y_1)}{p_t(y_1-x)}-\frac{\mathcal {Z}_{\varepsilon }(x;t,y_2)}{p_t(y_2-x)}\right| }{|y_2-y_1|^{\eta }} \right\| _{L^m(\Omega )}\le c_K(t) m^{\frac{1}{2}}[\Theta _t(m)]^{\frac{1}{m}}\,. \end{aligned}$$
(5.15)

In the above, the constant c depends only on \(\bar{\alpha }\) and \(\eta \), and \(c_K(t)\) depends only on \(K,t, \eta \).

6 Spatial asymptotic

In this section we study the asymptotic of

$$\begin{aligned} \sup _{|y|\le R}\frac{u(t,y)}{p_t*u_0(y)} \end{aligned}$$

as described in Theorems 1.3, 1.4 and 1.5. In what follows, we denote

$$\begin{aligned} a=\frac{2}{4-\bar{\alpha }} \end{aligned}$$
(6.1)

where we recall that \(\bar{\alpha }\) is defined in (1.19). Since \(0\le \bar{\alpha }<2\), a ranges over the interval [1/2, 1). Because \(R\mapsto \sup _{|y|\le R}\frac{u(t,y)}{p_t*u_0(y)}\) is monotone, it suffices to prove these results along the sequence \(R\in \{e^n\}_{n\ge 1}\).

6.1 The upper bound

This subsection is devoted to the proof of the upper bounds in Theorems 1.3 and 1.4 by combining the moment asymptotic bounds and the regularity estimates obtained in Sect. 5. We also recall that \(\Theta _t(m)\) is defined in (2.10). Propositions 5.1 and 5.3 together with (5.6) imply

$$\begin{aligned} \limsup _{m\rightarrow \infty }m^{-\frac{4- \bar{\alpha }}{2- \bar{\alpha }}}\log \Theta _t(m)\le \frac{t}{2}{\mathcal {E}}\,, \end{aligned}$$
(6.2)

where \({\mathcal {E}}\) is defined in (1.19). The following result gives an upper bound for spatial asymptotic of \(\mathcal {Z}(x;\cdot ,\cdot )\).

Theorem 6.1

For every compact set K, we have

$$\begin{aligned} \limsup _{n\rightarrow \infty }n^{-a}\sup _{x\in K,|y|\le e^n}\left( \log \mathcal {Z}(x;t,y)+\frac{|y-x|^2}{2t} \right) \le \frac{4- \bar{\alpha }}{2}\ell ^{\frac{2}{4- \bar{\alpha }}}\left( \frac{{\mathcal {E}}}{2- \bar{\alpha }}t\right) ^{1-a} \end{aligned}$$
(6.3)

Proof

We begin by noting that, according to Remark 4.5, \(\mathcal {Z}(x;t,y)\) is non-negative a.s. for each x, y and t. Let t be fixed and put

$$\begin{aligned} \mathcal {K}(x,y)=\frac{\mathcal {Z}(x;t,y)}{p_t(y-x)}\,, \end{aligned}$$

where we have omitted the dependence on t. For every \(n>1\) and every \(\lambda >0\), we consider the probability

$$\begin{aligned} P_n:=P\left( \sup _{x\in K,|y|\le e^n}\log \mathcal {K}(x,y)> \lambda n^a\right) \,. \end{aligned}$$

Let b be a fixed number such that \(a< b <1\). We can find points \(x_i\), \(i = 1, \dots , M_{n}\), such that \(K \subset \cup _{i=1}^{M_n}B(x_i, e^{-n^b})\) and \(M_n\lesssim e^{\ell n^b}\). In addition, by partitioning the ball \(B(0,e^n)\) into unit balls, we see that \(P_n\) is at most

$$\begin{aligned} c(\ell ) e^{\ell n+\ell n^b}\sup _{w\in \mathbb {R}^\ell , x_i}P\left( \sup _{x\in B(x_i, e^{-n^b}), y\in B(w,1)}\mathcal {K}(x,y)> e^{\lambda n^a}\right) \,. \end{aligned}$$

Applying Chebyshev's inequality, we see that

$$\begin{aligned} P\left( \sup _{x\in B(x_i, e^{-n^b}),y\in B(w,1)}\mathcal {K}(x,y)> e^{\lambda n^a}\right) \le e^{-\lambda m n^a}\left\| \sup _{x\in B(x_i, e^{-n^b}),y\in B(w,1)}\mathcal {K}(x,y) \right\| _{L^m(\Omega )}^m\,. \end{aligned}$$

The above m-th moment is estimated by the triangle inequality

$$\begin{aligned}&\left\| \sup _{{x\in B(x_i, e^{-n^b})},y\in B(w,1)}\mathcal {K}(x,y) \right\| _{L^m(\Omega )}^m\\&\quad \le 3^m \left\| \sup _{{x\in B(x_i, e^{-n^b})},y\in B(w,1)} \left| \mathcal {K}(x,y)-\mathcal {K}(x_i,y)\right| \right\| _{L^m(\Omega )}^m\\&\qquad + 3^m \left\| \sup _{y\in B(w,1)} \left| \mathcal {K}(x_i,y)-\mathcal {K}(x_i,w)\right| \right\| _{L^m(\Omega )}^m\\&\qquad + 3^m \left\| \mathcal {K}(x_i,w)\right\| _{L^m(\Omega )}^m\\&\quad :=3^m( I_1 + I_2 + I_3)\,. \end{aligned}$$

Using Proposition 5.5 and (5.6), we see that

$$\begin{aligned} I_1\lesssim e^{-\eta m n^b+ c m ^{\frac{1}{1-a}}} \Theta _t(m)\,, \quad I_2 \lesssim m^{\frac{m}{2}} \Theta _t(m)\,, \quad I_3\le \Theta _t(m)\,. \end{aligned}$$

Altogether, we have

$$\begin{aligned} P_n \lesssim 3^me^{\ell n^b+\ell n-\lambda m n^a} \left( e^{-\eta m n^b+c m ^{\frac{1}{1-a} }} \Theta _t(m)+ m^{\frac{m}{2}} \Theta _t(m) \right) \,. \end{aligned}$$
(6.4)

For each \(\beta >0\), we choose \(m= \lfloor \beta n^{1-a} \rfloor \). In addition, for every fixed \(\varepsilon >0\), (6.2) yields

$$\begin{aligned} \log \Theta _t(\lfloor \beta n^{1-a}\rfloor )\le \left( \frac{t}{2}{\mathcal {E}}+\varepsilon \right) \beta ^{\frac{1}{1-a}}n \end{aligned}$$
(6.5)

for all n sufficiently large. It follows from (6.4) and (6.5) that

$$\begin{aligned} \sum _{n=1}^\infty P\left( \sup _{x\in K,|y|\le e^n}\log \mathcal {K}(x,y)> \lambda n^{a}\right) \lesssim S_1+S_2\,, \end{aligned}$$
(6.6)

where

$$\begin{aligned} S_1&=\sum _{n=1}^\infty \exp \left\{ \ell n^b+\beta (\log 3) n^{1-a} + (\ell - \lambda \beta +c \beta ^{\frac{1}{1-a}})n - \eta \beta n^{1-a+b}\right\} \,,\\ S_2&=\sum _{n=1}^\infty \exp \left\{ \ell n^b + n \ell - \lambda \beta n + \left( \frac{t}{2}{\mathcal {E}}+\varepsilon \right) \beta ^{\frac{1}{1-a}}n \right\} \,. \end{aligned}$$

Since \(1-a+b>1\), the term \(e^{-\eta \beta n^{1-a+b}}\) is dominant, and hence, \(S_1\) is finite for every \(\lambda ,\beta >0\). To ensure the convergence of \(S_2\), we choose \(\lambda \) such that

$$\begin{aligned} \lambda > \ell \beta ^{-1}+(\frac{t}{2}{\mathcal {E}}+\varepsilon ) \beta ^{\frac{a}{1-a}}\,. \end{aligned}$$
(6.7)

It follows that the series on the right-hand side of (6.6) is finite. By the Borel–Cantelli lemma, we have almost surely

$$\begin{aligned} \limsup _{n\rightarrow \infty } n^{-a}\sup _{x\in K,|y|\le e^n}\log \mathcal {K}(x,y)\le \lambda \,. \end{aligned}$$

Evidently, the best choice for \(\lambda \) is

$$\begin{aligned} \lambda _0:&=\inf _{\varepsilon>0,\beta >0}\left\{ \ell \beta ^{-1}+(\frac{t}{2}{\mathcal {E}}+\varepsilon ) \beta ^{\frac{a}{1-a}}\right\} \nonumber \\&=\frac{4- \bar{\alpha }}{2}\ell ^{\frac{2}{4- \bar{\alpha }}} \left( \frac{t{\mathcal {E}}}{2- \bar{\alpha }} \right) ^{\frac{2- \bar{\alpha }}{4- \bar{\alpha }}}\,, \end{aligned}$$
(6.8)

which yields (6.3). \(\square \)
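The closed form (6.8) comes from minimizing \(g(\beta )=\ell \beta ^{-1}+c\beta ^{a/(1-a)}\) with \(c=\frac{t}{2}{\mathcal {E}}\); setting \(g'(\beta )=0\) gives \(\beta _*^{1/(1-a)}=\ell (1-a)/(ca)\). A numerical cross-check of the closed form against a grid minimization (a sketch; the parameter values are arbitrary):

```python
def lam0_closed(ell, t, E, ab):
    # Closed form (6.8); ab plays the role of alpha-bar, a = 2/(4 - ab).
    return (4 - ab) / 2 * ell ** (2 / (4 - ab)) \
        * (t * E / (2 - ab)) ** ((2 - ab) / (4 - ab))

def lam0_grid(ell, t, E, ab):
    a = 2 / (4 - ab)
    g = lambda b: ell / b + (t * E / 2) * b ** (a / (1 - a))
    return min(g(k * 1e-4) for k in range(1, 50001))  # beta in (0, 5]

ell, t, E, ab = 2, 1.5, 0.8, 1.0
assert abs(lam0_closed(ell, t, E, ab) - lam0_grid(ell, t, E, ab)) < 1e-6
```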

Remark 6.2

Using Proposition 5.6 and arguments analogous to those in Theorem 6.1, we can show that

$$\begin{aligned}&\limsup _{R\rightarrow \infty }(\log R)^{-\frac{2}{4- \bar{\alpha }}}\sup _{x\in K, \varepsilon \in (0,1], |y|\le R}\left( \log \mathcal {Z}_{\varepsilon }(x;t,y)+\frac{|y-x|^2}{2t} \right) \nonumber \\&\quad \le \frac{4- \bar{\alpha }}{2}\ell ^{\frac{2}{4-\bar{\alpha }}}\left( \frac{{\mathcal {E}}}{2- \bar{\alpha }}t\right) ^{\frac{2- \bar{\alpha }}{4- \bar{\alpha }}}\,. \end{aligned}$$
(6.9)

We omit the details.

6.2 The lower bound

We now focus on the lower bounds in (1.15) and (1.26). To start, we explain an issue with the localization procedure of [3, 7]. In these papers, a localized version of Eq. (1.1) is introduced, i.e.

$$\begin{aligned} U^{\beta }(t,x) = 1 + \int _0^t \int _{|y-x|\le \beta \sqrt{t}} p_{t-s}(x-y) U^{\beta }(s,y)W(ds,dy)\,, \end{aligned}$$
(6.10)

for some \(\beta >0\). For fixed t and \(\beta \) sufficiently large, \(\sup _{|x|\le R} U^\beta (t,x)\) gives a good approximation of \(\sup _{|x|\le R}u(t,x)\) as \(R\rightarrow \infty \). In our situation, suppose for instance that \(u_0=\delta (\cdot -x_0)\); then the random field \(\frac{\mathcal {Z}(x_0; t,x)}{p_t(x-x_0)}\) satisfies the equation

$$\begin{aligned} \frac{\mathcal {Z}(x_0; t,x)}{p_{t}(x-x_0)} = 1 + \int _0^t \int _{\mathbb {R}^{\ell }} p_{\frac{s(t-s)}{t}}\left( y-x_0-\frac{s}{t}(x-x_0) \right) \frac{\mathcal {Z}(x_0; s,y)}{p_s(y-x_0)}W(ds,dy)\,. \end{aligned}$$
(6.11)

Since the kernel \(p_{\frac{s(t-s)}{t}}\left( y-x_0-\frac{s}{t}(x-x_0)\right) \) now involves s and t with s moving from 0 to t, the mass concentration of the stochastic integral on the right-hand side of (6.11) varies with s. We are not able to find a fixed localized integration domain similar to \(\{y:|y-x|\le \beta \sqrt{t}\}\). To get around this difficulty, we propose an alternative result (Theorem 1.5), which concerns the regularized version \(\mathcal {Z}_{\varepsilon }\) of \(\mathcal {Z}\). To handle the spatial asymptotics of \(\mathcal {Z}_{\varepsilon }\), we rely on the Feynman–Kac representation (4.2) and adopt an argument developed by Xia Chen in [3], with an additional scaling procedure.

Hereafter, t and \(\varepsilon \) are fixed positive constants and n is the driving parameter, which tends to infinity. We set

$$\begin{aligned} \varepsilon _n= \left\{ \begin{array}{ll} 0&{} \text {if (H.1) holds}\\ \varepsilon \left( \frac{t}{n} \right) ^a &{}\text {if (H.2) holds}\,. \end{array} \right. \end{aligned}$$
(6.12)

Let \(y_1,\dots ,y_N\) be N points in \(B(0,e^n)\) and d be a positive number such that

$$\begin{aligned} N\lesssim e^{\ell n}\quad \text{ and }\quad |y_j-y_k|\ge d\quad \forall j\ne k\,. \end{aligned}$$
(6.13)

Under (H.1), d is chosen to be sufficiently large, depending on the shape of \(\gamma \), while under (H.2), we can simply choose \(d=1\). See Lemma 6.4 below for more details.

Theorem 6.3

For every \(x_0\in \mathbb {R}^\ell \)

$$\begin{aligned} \liminf _{n\rightarrow \infty }n^{-a}\sup _{|y|\le e^n}\sup _{\varepsilon \in (0,1)} \log \frac{\mathcal {Z}_\varepsilon (x_0;t,y)}{p_t(y-x_0)}\ge \frac{4-\bar{\alpha }}{2}\ell ^{\frac{2}{4-\bar{\alpha }}}\left( \frac{{\mathcal {E}}}{2-\bar{\alpha }}t \right) ^{\frac{2-\bar{\alpha }}{4-\bar{\alpha }}} \end{aligned}$$
(6.14)

Proof

Step 1: Let \(m=m_n\) be a natural number such that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{n^{1-a}}{m_n} = 0\,. \end{aligned}$$
(6.15)

Under hypothesis (H.1), for each j, we define the stopping time

$$\begin{aligned} \tau ^j=\inf \left\{ s\ge 0:|B^j(s)|\ge r_0 \right\} \end{aligned}$$
(6.16)

where \(r_0>0\) is chosen so that

$$\begin{aligned} \inf _{|x|< 2 r_0}\gamma (x)>0\,. \end{aligned}$$
(6.17)

Such a constant always exists since \(\gamma \) is continuous and \(\gamma (0)>0\). Under hypothesis (H.2), the stopping time depends on n and an arbitrary domain. More precisely, let D be an open bounded ball in \(\mathbb {R}^\ell \) which contains 0. For each j, \(\tau ^j=\tau ^j_n(D)\) denotes the stopping time

$$\begin{aligned} \tau _n^j(D)=\inf \left\{ s\ge 0:B^j(s)\not \in \left( \frac{t}{n}\right) ^{\frac{a}{2}} D \right\} \,. \end{aligned}$$
(6.18)

As previously, we denote

$$\begin{aligned} \mathcal {K}_ {\varepsilon _n}(x,y)=\frac{\mathcal {Z}_ {\varepsilon _n}(x;t,y)}{p_t(y-x)}\,, \end{aligned}$$

omitting the dependence on t. We note that from (4.2)

$$\begin{aligned}&\left( \mathcal {K}_{\varepsilon _n}(x_0,y) \right) ^m\\&\quad = \mathbb {E}_B \exp \left( \sum _{j=1}^m \int _0^t \int _{\mathbb {R}^{\ell }} \delta \left( B_{0,t}^j(t-s)+\frac{t-s}{t}x_0 + \frac{s}{t}y-z\right) W_ {\varepsilon _n}(ds,dz) - \frac{tm}{2}\gamma _ {\varepsilon _n}(0) \right) \\&\quad = e^{-\frac{tm}{2} \gamma _ {\varepsilon _n}(0)} \mathbb {E}_B e^{\xi _m(x_0,y)}, \end{aligned}$$

where

$$\begin{aligned} \xi _m(x_0,y)=\sum _{j=1}^m \int _{0}^t \int _{\mathbb {R}^{\ell }} \delta \left( B_{0,t}^j(t-s)+\frac{t-s}{t}x_0 + \frac{s}{t}y-z\right) W_ {\varepsilon _n}(ds,dz)\,. \end{aligned}$$
(6.19)

Conditioning on B, the variance of \(\xi _m(x_0,y)\) is given by

$$\begin{aligned} S_m^2=\mathbb {E}_B (\xi _m(x_0,y)^2)= \sum _{j,k=1}^m\int _{0}^t \gamma _ {\varepsilon _n}(B_{0,t}^j(s)-B_{0,t}^k(s))ds\,. \end{aligned}$$

For every \(\lambda >0\), it is evident that

$$\begin{aligned} \mathbb {E}_B e^{\xi _m(x_0,y) }&\ge \mathbb {E}_B \left\{ e^{\lambda \sqrt{n} S_m(t)}; \xi _m(x_0,y)\ge \lambda \sqrt{n} S_m(t), \min _{1\le k\le m} \tau ^k\ge t \right\} \\&=[\mathbb {E}_B Z_m(n)]\eta _n(x_0,y) \,, \end{aligned}$$

where we have put

$$\begin{aligned} Z_m(n)= e^{\lambda \sqrt{n} S_m(t) } \mathbf {1} _{\{\min _{1\le j\le m} \tau ^j_n(D)\ge t \}}\,, \end{aligned}$$
(6.20)

and

$$\begin{aligned} \eta _n(x_0,y) := \left[ \mathbb {E}_B Z_m(n) \right] ^{-1} \mathbb {E}_B \left( Z_m(n) \mathbf {1} _{\{ \xi _m(x_0,y) \ge \lambda \sqrt{n} S_m(t) \}} \right) \,. \end{aligned}$$
(6.21)

Combining all previous estimates, we arrive at an important inequality

$$\begin{aligned} \mathcal {K}_{\varepsilon _n} (x_0,y)\ge e^{-\frac{t}{2} \gamma _{\varepsilon _n}(0)}[\mathbb {E}_B Z_m(n)]^{\frac{1}{m}}[\eta _n(x_0,y)]^{\frac{1}{m}}\,. \end{aligned}$$
(6.22)

It follows that

$$\begin{aligned} \sup _{j=1,\dots ,N}\mathcal {K}_{\varepsilon _n}(x_0,y_j)&\ge N^{-\frac{1}{m}}\left( \sum _{j=1}^N [\mathcal {K}_{\varepsilon _n}(x_0,y_j)]^m\right) ^{\frac{1}{m}}\\&\ge N^{-\frac{1}{m}}e^{-\frac{t}{2} \gamma _{\varepsilon _n}(0)}[\mathbb {E}_B Z_m(n)]^{\frac{1}{m}} \left( \sum _{j=1}^N\eta _n(x_0,y_j) \right) ^{\frac{1}{m}}\,. \end{aligned}$$

We put

$$\begin{aligned} \eta ^c_n(x_0)=\left[ \mathbb {E}_B Z_m(n) \right] ^{-1} \mathbb {E}_B \left( Z_m(n) \mathbf {1} _{\{ \max _{j=1,\dots ,N}\xi _m(x_0,y_j) < \lambda \sqrt{n} S_m(t) \}} \right) \,. \end{aligned}$$
(6.23)

Applying the estimate

$$\begin{aligned} \sum _{j=1}^N\eta _n(x_0,y_j) \ge 1- \eta ^c_n(x_0)\,, \end{aligned}$$

we obtain

$$\begin{aligned} \sup _{j=1,\dots ,N}\mathcal {K}_{\varepsilon _n}(x_0,y_j) \ge N^{-\frac{1}{m}}e^{-\frac{t}{2} \gamma _{\varepsilon _n}(0)}[\mathbb {E}_B Z_m(n)]^{\frac{1}{m}} [1- \eta _n^c(x_0)]^{\frac{1}{m}} \end{aligned}$$
(6.24)

Noting that \(N^{-\frac{1}{m}} \gtrsim e^{-\ell \frac{n}{m}} \) and that, by (1.24), \(\gamma _{\varepsilon _n}(0)=\varepsilon _n^{-\frac{\alpha }{2}}\gamma _1(0)\lesssim n^{\frac{\alpha }{2}a}\), we see that

$$\begin{aligned} \lim _{n\rightarrow \infty }n^{-a}\log \left( N^{-\frac{1}{m}}e^{-\frac{t}{2} \gamma _{\varepsilon _n}(0)}\right) =0\,. \end{aligned}$$
(6.25)

In other words, the factor \(N^{-\frac{1}{m}}e^{-\frac{t}{2} \gamma _{\varepsilon _n}(0)}\) in (6.24) is negligible. In addition, we claim that for every \(\lambda \in (0,\sqrt{2\ell })\) and every \(x_0\in \mathbb {R}^\ell \)

$$\begin{aligned} \lim _{n\rightarrow \infty }\eta _n^c(x_0)=0\quad \mathrm {a.s.} \end{aligned}$$
(6.26)

We postpone the proof of this claim until Lemma 6.4 below. It follows that

$$\begin{aligned} \liminf _{n\rightarrow \infty }n^{-a}\log \max _{j=1, \dots , N}\mathcal {K}_{\varepsilon _n}(x_0,y_j) \ge \liminf _{n\rightarrow \infty }n^{-a}m^{-1}\log \mathbb {E}_B Z_m(n)\,. \end{aligned}$$
(6.27)

Step 2: We will show that

$$\begin{aligned} \liminf _{\varepsilon \downarrow 0,D\uparrow \mathbb {R}^\ell }\liminf _{n\rightarrow \infty } n^{-a}m^{-1} \log \mathbb {E}_BZ_m(n)\ge \frac{4- \bar{\alpha }}{4} \lambda ^{\frac{4}{4- \bar{\alpha }}} \left( \frac{2t{\mathcal {E}}}{2- \bar{\alpha }} \right) ^{\frac{2- \bar{\alpha }}{4- \bar{\alpha }}} \,. \end{aligned}$$
(6.28)

We consider first the hypothesis (H.1). Since \(\gamma \) is continuous, for any \(\varepsilon >0\) there is \(\delta >0\) such that \(\gamma (z) \ge \gamma (0)-\varepsilon \) whenever \(|z|\le \delta \wedge r_0\). Hence,

$$\begin{aligned} \mathbb {E}_B Z_m(n) \ge \exp \left\{ \lambda \sqrt{n} \left[ m(m-1) t \left( \gamma (0)-\varepsilon \right) \right] ^{1/2} \right\} \mathbb {P}\left( \sup _{0 \le s \le t} |B_{0,t} (s)|\le \delta \wedge r_0 \right) ^m\,. \end{aligned}$$

Since \(m \rightarrow \infty \) as \(n \rightarrow \infty \), we have

$$\begin{aligned} \liminf _{n \rightarrow \infty } m^{-1} n^{-1/2} \log \mathbb {E}_B Z_m(n) \ge \lambda \sqrt{ t (\gamma (0)-\varepsilon )}\,, \end{aligned}$$

which, upon letting \(\varepsilon \downarrow 0\), proves (6.28) under (H.1).

Assume now that (H.2) holds. We put \(t_n=t^{1-a}n^a\) so that \(\varepsilon _n=\varepsilon \frac{t}{t_n}\). The Brownian motion scaling and the relation (1.24) yield

$$\begin{aligned} \int _0^t \gamma _{\varepsilon _n}(B^j_{0,t}(s)-B^k_{0,t}(s))ds&=\frac{t}{t_n} \int _0^{t_n} \gamma _{\varepsilon \frac{t}{t_n}}\left( B^j_{0,t}(s\frac{t}{t_n})-B^k_{0,t}(s\frac{t}{t_n})\right) ds\\&\overset{\mathrm {law}}{=}\frac{t}{t_n}\int _0^{t_n} \gamma _{\varepsilon \frac{t}{t_n}}\left( \sqrt{\frac{t}{t_n}}(B^j_{0,t_n}(s)-B^k_{0,t_n}(s))\right) ds\\&=\left( \frac{t}{t_n} \right) ^{1- \frac{\alpha }{2}}\int _0^{t_n}\gamma _ \varepsilon (B^j_{0,t_n}(s)-B^k_{0,t_n}(s))ds\,. \end{aligned}$$

It follows that

$$\begin{aligned} \mathbb {E}_B Z_m(n)=\mathbb {E}_B\left[ \exp \left\{ \lambda \left( t_n \sum _{j,k=1}^m\int _0^{t_n}\gamma _ \varepsilon (B^j_{0,t_n}(s)-B^k_{0,t_n}(s))ds \right) ^{\frac{1}{2}} \right\} ;\min _{1\le j\le m}\tau ^j_D\ge t_n\right] \,, \end{aligned}$$

where

$$\begin{aligned} \tau _D^j=\inf \left\{ s\ge 0:B^j(s)\not \in D \right\} \,. \end{aligned}$$

Let \(K_ \varepsilon \) be the function defined by

$$\begin{aligned} K_ \varepsilon (x)=(2 \pi )^{-\ell }\int _{\mathbb {R}^\ell }e^{i \xi \cdot x-\frac{\varepsilon }{2}|\xi |^2}\sqrt{\mu (\xi )}d \xi \end{aligned}$$

so that

$$\begin{aligned} \gamma _ \varepsilon (x)=\int _{\mathbb {R}^\ell }K_ \varepsilon (y)K_ \varepsilon (x-y)dy\,. \end{aligned}$$
(6.29)

Hence, we can write

$$\begin{aligned} \left( t_n\sum _{j,k=1}^m\int _0^{t_n}\gamma _ \varepsilon (B^j_{0,t_n}(s)-B^k_{0,t_n}(s))ds \right) ^{\frac{1}{2}} =\left( t_n\int _0^{t_n}\int _{\mathbb {R}^\ell }\left| \sum _{j=1}^m K_ \varepsilon (x-B^j_{0,t_n}(s)) \right| ^2 dxds \right) ^{\frac{1}{2}}\,. \end{aligned}$$
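This identity follows by expanding the square and using (6.29); here we use that \(\mu \) is symmetric, so that \(K_ \varepsilon \) is even. Indeed, writing \(B^j=B^j_{0,t_n}(s)\) and substituting \(y=x-B^k\),

$$\begin{aligned} \int _{\mathbb {R}^\ell }\Big | \sum _{j=1}^m K_ \varepsilon (x-B^j)\Big |^2dx&=\sum _{j,k=1}^m\int _{\mathbb {R}^\ell }K_ \varepsilon (x-B^j)K_ \varepsilon (x-B^k)dx\\&=\sum _{j,k=1}^m\int _{\mathbb {R}^\ell }K_ \varepsilon (y)K_ \varepsilon \left( (B^j-B^k)-y\right) dy=\sum _{j,k=1}^m\gamma _ \varepsilon (B^j-B^k)\,. \end{aligned}$$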

Let \(\mathcal D\) be the set of compactly supported continuous functions on \(\mathbb {R}^\ell \) with unit \(L^2(\mathbb {R}^{\ell })\)-norm. For every \(f\in \mathcal D\), applying the Cauchy–Schwarz inequality, we see that the right-hand side of the equation above is at least

$$\begin{aligned} \sum _{j=1}^m\int _0^{t_n}\int _{\mathbb {R}^\ell }f(x) K_ \varepsilon \left( x-B^j_{0,t_n}(s)\right) dxds=\sum _{j=1}^m\int _0^{t_n}\bar{f}_ \varepsilon \left( B^j_{0,t_n}(s)\right) ds\,, \end{aligned}$$

where we have set

$$\begin{aligned} \bar{f}_ \varepsilon (x)=\int _{\mathbb {R}^\ell }f(y)K_ \varepsilon (y-x)dy\,. \end{aligned}$$
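The Cauchy–Schwarz step can be made explicit. Writing \(h_s(x)=\sum _{j=1}^m K_ \varepsilon (x-B^j_{0,t_n}(s))\) and using \(\Vert f\Vert _{L^2}=1\),

$$\begin{aligned} \sum _{j=1}^m\int _0^{t_n}\bar{f}_ \varepsilon \left( B^j_{0,t_n}(s)\right) ds=\int _0^{t_n}\langle f,h_s\rangle _{L^2}ds\le \int _0^{t_n}\Vert h_s\Vert _{L^2}ds\le \left( t_n\int _0^{t_n}\Vert h_s\Vert _{L^2}^2ds \right) ^{\frac{1}{2}}\,, \end{aligned}$$

where the first inequality is the Cauchy–Schwarz inequality in \(x\) and the second is the Cauchy–Schwarz inequality in \(s\).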

Using the independence of the Brownian motions, we obtain

$$\begin{aligned} \mathbb {E}_B Z_m(n)\ge \left( \mathbb {E}_B\left[ \exp \left\{ \lambda \int _0^{t_n}\bar{f}_ \varepsilon \left( B_{0,t_n}(s)\right) ds \right\} ;\tau _D\ge t_n \right] \right) ^m\,, \end{aligned}$$

where \(\tau _D := \inf \{s \ge 0: B(s) \notin D\}\). Applying Lemma 3.5, we obtain

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{1}{mt_n}\log \mathbb {E}_B Z_m(n)\ge \sup _{g\in {\mathcal {G}}_D}\left\{ \lambda \int _D \bar{f}_ \varepsilon (x)g^2(x)dx-\frac{1}{2}\int _D|\nabla g(x)|^2dx \right\} \,. \end{aligned}$$

We now let \(D\uparrow \mathbb {R}^\ell \) to get

$$\begin{aligned} \liminf _{D\uparrow \mathbb {R}^\ell }\liminf _{n\rightarrow \infty }\frac{1}{mn^a }\log \mathbb {E}_B Z_m(n)\ge t^{1-a} \sup _{g\in {\mathcal {G}}}\left\{ \lambda \int _{\mathbb {R}^\ell } \bar{f}_ \varepsilon (x)g^2(x)dx-\frac{1}{2}\int _{\mathbb {R}^\ell }|\nabla g(x)|^2dx \right\} \,. \end{aligned}$$

We now link the variational problem on the right-hand side with \({\mathcal {M}}(\gamma )\) (cf. (3.4)) by observing that

$$\begin{aligned} \sup _{f \in \mathcal {D}}\sup _{g\in {\mathcal {G}}}\left\{ \lambda \int _{\mathbb {R}^\ell } \bar{f}_ \varepsilon (x)g^2(x)dx-\frac{1}{2}\int _{\mathbb {R}^\ell }|\nabla g(x)|^2dx \right\} ={\mathcal {M}}(\lambda ^2\gamma _ \varepsilon )\,. \end{aligned}$$
(6.30)

Indeed, for each fixed \(g\in {\mathcal {G}}\), applying Fubini’s theorem, the Hahn–Banach theorem and (6.29), we have

$$\begin{aligned} \sup _{f\in \mathcal D} \int _{\mathbb {R}^\ell }\bar{f}_ \varepsilon (x)g^2(x)dx&=\sup _{f\in \mathcal D}\int _{\mathbb {R}^\ell }f(y)\int _{\mathbb {R}^\ell }K_ \varepsilon (y-x)g^2(x)dxdy\\&=\left( \int _{\mathbb {R}^\ell }\left| \int _{\mathbb {R}^\ell }K_ \varepsilon (y-x)g^2(x)dx \right| ^2dy \right) ^{\frac{1}{2}}\\&=\left( \int _{\mathbb {R}^\ell }\int _{\mathbb {R}^\ell }\gamma _ \varepsilon (x-y)g^2(x)g^2(y)dxdy \right) ^{\frac{1}{2}}\,. \end{aligned}$$
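The second equality above is the \(L^2\)-duality relation, applied to \(H(y)=\int _{\mathbb {R}^\ell }K_ \varepsilon (y-x)g^2(x)dx\):

$$\begin{aligned} \sup _{f\in \mathcal D}\int _{\mathbb {R}^\ell }f(y)H(y)dy=\Vert H\Vert _{L^2(\mathbb {R}^\ell )}\,. \end{aligned}$$

The upper bound is the Cauchy–Schwarz inequality, while the supremum is attained in the limit by taking \(f\in \mathcal D\) close in \(L^2\) to \(H/\Vert H\Vert _{L^2}\), using the density of compactly supported continuous functions in \(L^2(\mathbb {R}^\ell )\).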

This leads to the identity (6.30). Sending \(\varepsilon \downarrow 0\) and applying Lemma 3.2 and Proposition 3.3, we obtain (6.28) under hypothesis (H.2).

Step 3: Combining the inequalities (6.27) and (6.28), we have for every \(\lambda \in (0,\sqrt{2\ell })\)

$$\begin{aligned} \liminf _{n\rightarrow \infty }\sup _{j =1, \dots , N}\mathcal {K}_ {\varepsilon _n}(x_0,y_j)\ge \frac{4- \bar{\alpha }}{4} \lambda ^{\frac{4}{4- \bar{\alpha }}} \left( \frac{2t{\mathcal {E}}}{2- \bar{\alpha }} \right) ^{\frac{2- \bar{\alpha }}{4- \bar{\alpha }}}\,. \end{aligned}$$

Finally we let \(\lambda \rightarrow \sqrt{2\ell }^-\) to conclude the proof. \(\square \)

We now provide the proof of (6.26).

Lemma 6.4

For every \(\lambda \in (0,\sqrt{2\ell })\), we have

$$\begin{aligned} \lim _{n \rightarrow \infty }\eta _n^c(x_0) = 0 \quad \mathrm {a.s.} \end{aligned}$$
(6.31)

where we recall that \(\eta _n^c\) is defined in (6.23).

Proof

Assume first that (H.1) holds. We recall that \(\varepsilon _n=0\) in this case, so that \(\gamma _{\varepsilon _n}=\gamma \). Let \(\mathcal {B}\) be the \(\sigma \)-field generated by the Brownian motions \(\{B^j\}_{1\le j \le m}\). First we show that for any \(0< \rho < \frac{1}{2}\), we can find \(d>0\) sufficiently large so that on the event \(\{\min _{1\le j\le m}\tau ^j\ge t\}\), for every \(z,z'\in B(0,e^n)\) with \(|z -z'|\ge d\),

$$\begin{aligned} \text {Cov} \left( \xi _m(x_0,z), \xi _m(x_0,z') \Big |\mathcal {B}\right) \le \rho S_m^2\,. \end{aligned}$$
(6.32)

We recall that d and \(\tau ^j\) are defined in (6.13) and (6.16), respectively. We choose and fix \(\varkappa \in (0,1)\) such that

$$\begin{aligned} \varkappa \gamma (0) \le \frac{1}{2}\rho \inf _{|x|\le 2 r_0}\gamma (x) \,. \end{aligned}$$
(6.33)

Note that on the event \(\{\min _{1\le j\le m}\tau ^j\ge t\}\), we have \(\sup _{s\le t,j\le m}|B^j_{0,t}(s)|\le r_0\). Then for every \(j,k\le m\),

$$\begin{aligned}&\int _0^{\varkappa t}\gamma \left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s)+ \frac{s}{t} (z-z')\right) ds \le \varkappa t \gamma (0)\\&\quad \le \frac{\rho }{2}\int _0^t\gamma \left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s)\right) ds \,. \end{aligned}$$
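The second inequality above is a direct consequence of (6.33): on the event \(\{\min _{1\le j\le m}\tau ^j\ge t\}\) we have \(|B_{0,t}^j(t-s) - B_{0,t}^k(t-s)|\le 2r_0\) for all \(s\in [0,t]\), hence

$$\begin{aligned} \varkappa t \gamma (0)\le \frac{\rho }{2}\, t \inf _{|x|\le 2r_0}\gamma (x)\le \frac{\rho }{2}\int _0^t\gamma \left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s)\right) ds\,. \end{aligned}$$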

In addition, by (1.3) and the Riemann–Lebesgue lemma, \(\lim _{|x|\rightarrow \infty } \gamma (x)=0\). Hence, we can choose d large enough such that for every \(s\in [\varkappa t , t]\), whenever \(|y|\le 2r_0\) and \(|z-z'|\ge d\),

$$\begin{aligned} \gamma (y+\frac{s}{t} (z-z') )\le \frac{\rho }{2} \gamma (y) \,. \end{aligned}$$

In particular, for every \(|z-z'|\ge d\) we have

$$\begin{aligned} \gamma \left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s)+ \frac{s}{t} (z-z')\right) \le \frac{\rho }{2} \gamma \left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s)\right) \,. \end{aligned}$$
(6.34)

It follows that

$$\begin{aligned}&\text {Cov}\left( \xi _m(x_0,z), \xi _m(x_0,z') \Big | \mathcal B\right) \\&\quad =\sum _{j, k=1}^m \int _0^t \gamma \left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s) + \frac{s}{t} (z-z')\right) ds\\&\quad \le \rho \sum _{j, k=1}^m \int _0^t \gamma \left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s) \right) ds\,, \end{aligned}$$

which verifies (6.32).

Since \(\lambda < \sqrt{2\ell }\), we can choose \(\kappa , \rho \in (0,\frac{1}{2})\) sufficiently small so that

$$\begin{aligned} \frac{(1+2\rho )(\lambda +\kappa )^2}{2} < \ell \quad \text {and} \quad \frac{\kappa ^2}{4\rho } > \ell +1\,. \end{aligned}$$
(6.35)

Let us now recall Lemma 4.2 in [2]: for a mean-zero n-dimensional Gaussian vector \((\xi _1, \dots , \xi _n)\) with identically distributed components satisfying

$$\begin{aligned} \max _{i \ne j} \frac{|\text {Cov} (\xi _i, \xi _j)|}{ \text {Var}(\xi _1)} \le \rho < \frac{1}{2} \end{aligned}$$
(6.36)

we have, for any \(A,B >0\),

$$\begin{aligned} \mathbb {P}\left\{ \max _{k \le n} \xi _k \le A \right\} \le \left( \mathbb {P}\left\{ \xi _1 \le \sqrt{1+2\rho } (A+B)\right\} \right) ^n + \mathbb {P}\left\{ U \ge B/\sqrt{2\rho \text {Var} (\xi _1)} \right\} \end{aligned}$$
(6.37)

where U is a standard normal random variable. Applying this inequality conditionally on \(\mathcal {B}\) with \(A=\lambda S_m \sqrt{n}\) and \(B= \kappa S_m \sqrt{n}\), we have for sufficiently large n,

$$\begin{aligned}&\mathbb {P}\left\{ \max _{j=1,\dots ,N} \xi _m(x_0,y_j) < \lambda \sqrt{n} S_m \Big | \mathcal {B} \right\} \\&\quad \le \left( \mathbb {P}\left\{ U \le \sqrt{1+2\rho } (\lambda +\kappa ) \sqrt{n}\right\} \right) ^{N} + \mathbb {P}\left\{ U \ge \frac{\kappa }{\sqrt{2\rho }} \sqrt{n} \right\} \\&\quad \le \exp \left\{ - (1+o(1)) C e^{vn} \right\} + e^{-(\ell +1)n} \le C e^{-(\ell +1)n}\,, \end{aligned}$$

where \(v>0\) is independent of n. Now for any \(\theta >0\), this yields

$$\begin{aligned} \mathbb {P}(\eta _n^c(x_0)\ge \theta )&\le \theta ^{-1}\mathbb {E}\eta _n^c(x_0)\\&=(\theta \mathbb {E}Z_m(n))^{-1}\mathbb {E}\left[ Z_m(n)\mathbb {P}\left\{ \max _{j=1,\dots ,N} \xi _m(x_0,y_j) < \lambda \sqrt{n} S_m \Big | \mathcal {B} \right\} \right] \\&\le C\theta ^{-1}e^{-(\ell +1)n}\,. \end{aligned}$$

An application of the Borel–Cantelli lemma yields (6.31) under hypothesis (H.1).
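To see how (6.35) entered the penultimate estimate, recall the standard Gaussian tail bounds \(\frac{x}{1+x^2}\frac{e^{-x^2/2}}{\sqrt{2\pi }}\le \mathbb {P}(U\ge x)\le e^{-x^2/2}\) for \(x>0\), and assume (as the construction of the points \(y_j\) in \(B(0,e^n)\) suggests) that \(N\ge c\,e^{\ell n}\) for some \(c>0\). Then

$$\begin{aligned} \mathbb {P}\left\{ U \ge \frac{\kappa }{\sqrt{2\rho }} \sqrt{n} \right\} \le e^{-\frac{\kappa ^2}{4\rho }n}\le e^{-(\ell +1)n} \quad \text {and}\quad N\, \mathbb {P}\left\{ U> \sqrt{1+2\rho } (\lambda +\kappa ) \sqrt{n}\right\} \ge e^{(v+o(1))n} \end{aligned}$$

with \(v=\ell - \frac{(1+2\rho )(\lambda +\kappa )^2}{2}>0\) by (6.35); the bound on the first term then follows from \((1-p)^N\le e^{-Np}\).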

We now consider the hypothesis (H.2). The argument is similar to the previous case; there is, however, an additional scaling procedure. Recall that \(\mathcal {B}\) is the \(\sigma \)-field generated by the Brownian motions \(\{B^j\}_{1\le j \le m}\). We choose \(d=1\). It suffices to prove (6.32) on the event \(\{ \min _{1\le j\le m} \tau ^j\ge t \}\) for any \(z,z'\) with \(|z-z'|\ge 1\). Indeed, we have

$$\begin{aligned} \text {Cov}\left( \xi _m(x_0,z), \xi _m(x_0,z')\Big | \mathcal B \right) =\sum _{j, k=1}^m \int _0^t \gamma _{\varepsilon _n} \left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s) + \frac{s}{t} (z-z')\right) ds\,. \end{aligned}$$

For every \(j,k\le m\), using the scaling relation (1.24), we can write

$$\begin{aligned}&\gamma _{\varepsilon _n}\left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s)+ \frac{s}{t} (z-z')\right) \\&\quad =\varepsilon _n^{-\frac{\alpha }{2}} \gamma _{1}\left( \varepsilon _n^{-\frac{1}{2}}(B_{0,t}^j(t-s) - B_{0,t}^k(t-s))+ \varepsilon _n^{-\frac{1}{2}}\frac{s}{t} (z-z')\right) \,. \end{aligned}$$

We now choose and fix \(\theta >0\) such that

$$\begin{aligned} \theta \le \frac{\rho }{2\gamma _1(0)}\inf _{x\in \varepsilon ^{-1/2} D} \gamma _1(x)\,. \end{aligned}$$
(6.38)

This is always possible since \(\gamma _1=p_{2}*\gamma \) is a strictly positive function. It follows that

$$\begin{aligned}&\varepsilon _n^{-\frac{\alpha }{2}}\int _0^{\theta t} \gamma _{1}\left( \varepsilon _n^{-\frac{1}{2}}(B_{0,t}^j(t-s) - B_{0,t}^k(t-s))+ \varepsilon _n^{-\frac{1}{2}}\frac{s}{t} (z-z')\right) ds\\&\quad \le \varepsilon _n^{-\frac{\alpha }{2}} \theta t \gamma _1(0)\\&\quad \le \frac{\rho }{2}\varepsilon _n^{-\frac{\alpha }{2}}\int _0^t\gamma _{1}\left( \varepsilon _n^{-\frac{1}{2}}(B_{0,t}^j(t-s) - B_{0,t}^k(t-s))\right) ds\\&\quad = \frac{\rho }{2}\int _0^t\gamma _{\varepsilon _n}\left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s)\right) ds \,. \end{aligned}$$

In addition, on the event \(\{\min _{1\le j\le m} \tau ^j\ge t \}\), \(\varepsilon _n^{-\frac{1}{2}}(B_{0,t}^j(t-s)-B_{0,t}^k(t-s))\) belongs to \(2\varepsilon ^{-\frac{1}{2}}D\) for all \(s\in [0,t]\). Hence, for every \(s\in [\theta t,t]\) and \(|z-z'|\ge 1\), we have

$$\begin{aligned} \left| \varepsilon _n^{-\frac{1}{2}}(B_{0,t}^j(t-s)-B_{0,t}^k(t-s))+ \varepsilon _n^{-\frac{1}{2}}\frac{s}{t} (z-z')\right| \ge \theta \varepsilon _n^{-\frac{1}{2}}- 2 \varepsilon ^{-\frac{1}{2}} \mathrm {diam}(D)\,. \end{aligned}$$

We note that, by the Riemann–Lebesgue lemma, \(\lim _{|x|\rightarrow \infty } \gamma _1(x)=0\). Hence, whenever n is sufficiently large,

$$\begin{aligned} \gamma _1(y)\le \frac{\rho }{2}\inf _{x\in \varepsilon ^{-1/2} D}\gamma _1(x) \end{aligned}$$

for all \(|y|\ge \theta \varepsilon _n^{-\frac{1}{2}}- 2 \varepsilon ^{-\frac{1}{2}} \mathrm {diam}(D)\). It follows that for every \(z,z'\) with \(|z-z'|\ge 1\),

$$\begin{aligned}&\varepsilon _n^{-\frac{\alpha }{2}}\int _{\theta t}^t \gamma _{1}\left( \varepsilon _n^{-\frac{1}{2}}(B_{0,t}^j(t-s) - B_{0,t}^k(t-s))+ \varepsilon _n^{-\frac{1}{2}}\frac{s}{t} (z-z')\right) ds\\&\quad \le \varepsilon _n^{-\frac{\alpha }{2}}\frac{\rho }{2} \int _{\theta t}^t\gamma _{1}\left( \varepsilon _n^{-\frac{1}{2}}(B_{0,t}^j(t-s) - B_{0,t}^k(t-s))\right) ds\\&\quad \le \frac{\rho }{2} \int _{0}^t\gamma _{\varepsilon _n}\left( B_{0,t}^j(t-s) - B_{0,t}^k(t-s)\right) ds\,. \end{aligned}$$

Upon combining these estimates, we arrive at (6.32), which, in turn, implies (6.31) by the same argument as in the first case. \(\square \)

6.3 Proofs

Theorems 1.3, 1.4 and 1.5 follow from the asymptotic results of the previous two subsections. Indeed, Theorem 1.3 follows by combining the upper bound in Theorem 6.1 with the lower bound in Theorem 6.3. To obtain Theorem 1.4, we first observe from (4.1) that

$$\begin{aligned} \frac{u(t,y)}{p_t*u_0(y)}\le \sup _{x \in \mathrm {supp}\,u_0} \frac{\mathcal {Z}(x; t,y)}{p_t(x-y)}\,. \end{aligned}$$
(6.39)

Then an application of Theorem 6.1 yields the result. For Theorem 1.5, the upper bound in (1.25) follows from Remark 6.2 and the bound (6.39), with \(u,\mathcal {Z}\) replaced by \(u_{\varepsilon }, \mathcal {Z}_ \varepsilon \) respectively, together with the fact that \(\mathcal {E}_H(\gamma _{\varepsilon })\le \mathcal {E}_H(\gamma )\); see (3.3). The lower bound in (1.26) is immediate from Theorem 6.3.