
Abstract

We approximate the white-noise driven stochastic heat equation by replacing the fractional Laplacian with the generator of a discrete-time random walk on the one-dimensional lattice, and by approximating white noise with a collection of i.i.d. mean-zero random variables. As a consequence, we give an alternative proof of the weak convergence of the scaled partition function of directed polymers in the intermediate disorder regime to the stochastic heat equation; an advantage of this proof is that it yields the convergence of all moments.


References

  1. Alberts, T., Khanin, K., Quastel, J.: The intermediate disorder regime for directed polymers in dimension \(1+1\). Ann. Probab. 42(3), 1212–1256 (2014)


  2. Balázs, M., Rassoul-Agha, F., Seppäläinen, T.: The random average process and random walk in a space–time random environment in one dimension. Commun. Math. Phys. 266, 499–545 (2006)


  3. Borovkov, A.A.: On the rate of convergence for the invariance principle. Theory Probab. Appl. 18, 207–225 (1973)


  4. Caravenna, F., Sun, R., Zygouras, N.: Polynomial chaos and scaling limits of disordered systems. J. Eur. Math. Soc. (JEMS) 19(1), 1–65 (2017)


  5. Chen, L., Dalang, R.C.: Moments, intermittency and growth indices for the nonlinear fractional stochastic heat equation. Stoch. Partial Differ. Equ. Anal. Comput. 3(3), 360–397 (2015)


  6. Comets, F.: Directed polymers in random environments. Lecture Notes in Mathematics, vol. 2175. Springer, Cham (2017). Lecture notes from the 46th Probability Summer School held in Saint-Flour, 2016

  7. Comets, F., Yoshida, N.: Directed polymers in random environment are diffusive at weak disorder. Ann. Probab. 34(5), 1746–1770 (2006)


  8. Conus, D., Joseph, M., Khoshnevisan, D., Shiu, S.-Y.: On the chaotic character of the stochastic heat equation, II. Probab. Theory Relat. Fields 156(3–4), 483–533 (2013)


  9. Corwin, I.: The Kardar–Parisi–Zhang equation and universality class. Random Matrices Theory Appl. 1(1), 1130001 (2012)


  10. den Hollander, F.: Random polymers. Lecture Notes in Mathematics, vol. 1974. Springer, Berlin (2009). Lectures from the 37th Probability Summer School held in Saint-Flour, 2007


  11. Durrett, R.: Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics, 4th edn. Cambridge University Press, Cambridge (2010)


  12. Èbralidze, Š.S.: Inequalities for probabilities of large deviations in the multidimensional case. Theory Probab. Appl. 16, 733–741 (1971)


  13. Foondun, M., Joseph, M., Li, S.-T.: An approximation result for a class of stochastic heat equations with colored noise. arXiv:1611.06829

  14. Foondun, M., Khoshnevisan, D.: Intermittence and nonlinear parabolic stochastic partial differential equations. Electron. J. Probab. 14(21), 548–568 (2009)


  15. Foondun, M., Khoshnevisan, D.: An asymptotic theory for randomly forced discrete nonlinear heat equations. Bernoulli 18(3), 1042–1060 (2012)


  16. Funaki, T.: Random motion of strings and related stochastic evolution equations. Nagoya Math. J. 89, 129–193 (1983)


  17. Gnedenko, B.V., Kolmogorov, A.N.: Limit Distributions for Sums of Independent Random Variables. Addison-Wesley, Cambridge (1954). Translated and annotated by K. L. Chung, with an appendix by J. L. Doob

  18. Gyöngy, I.: Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space–time white noise I. Potential Anal. 9(1), 1–25 (1998)


  19. Gyöngy, I.: Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space–time white noise II. Potential Anal. 11(1), 1–37 (1999)


  20. Heyde, C.C.: On large deviation probabilities in the case of attraction to a non-normal stable law. Sankhyā Ser. A 30, 253–258 (1968)


  21. Joseph, M., Khoshnevisan, D., Mueller, C.: Strong invariance and noise-comparison principles for some parabolic stochastic PDEs. Ann. Probab. 45(1), 377–403 (2017)


  22. Kanagawa, S.: The rate of convergence for approximate solutions of stochastic differential equations. Tokyo J. Math. 12(1), 33–48 (1989)


  23. Kolokoltsov, V.: Symmetric stable laws and stable-like jump-diffusions. Proc. Lond. Math. Soc. (3) 80(3), 725–768 (2000)


  24. Kumar, R.: Space–time current process for independent random walks in one dimension. ALEA Lat. Am. J Probab. Math. Stat. 4, 307–336 (2008)


  25. Nagaev, S.V.: Large deviations of sums of independent random variables. Ann. Probab. 7(5), 745–789 (1979)


  26. Osipov, L.V.: Asymptotic expansions in the central limit theorem. Vestnik Leningrad. Univ. 22(19), 45–62 (1967)


  27. Seppäläinen, T., Zhai, Y.: Hammersley’s harness process: invariant distributions and height fluctuations. Ann. Inst. Henri Poincaré Probab. Stat. 53(1), 287–321 (2017)


  28. Spitzer, F.: Principles of Random Walks. Graduate Texts in Mathematics, vol. 34, 2nd edn. Springer, New York (1976)


Acknowledgements

The author thanks the referee for a careful reading of the paper and for several comments. He expresses his gratitude to Davar Khoshnevisan for providing the proof of Lemma 3.3. He also thanks David Applebaum and Timo Seppäläinen for comments on an earlier version of the paper. This work was done while the author was at the University of Sheffield, and he thanks the School of Mathematics and Statistics for a supportive environment. Partial support from the Engineering and Physical Sciences Research Council (EPSRC) through Grant EP/N028457/1 is gratefully acknowledged.


Corresponding author

Correspondence to Mathew Joseph.

Appendices

Appendix: A local limit theorem

The following local limit theorem is a modification of Proposition 3.3 in [13]. Below \(p_t\) is the density for the Stable(\(\alpha \)) process with generator \(-\nu (-\Delta )^{\alpha /2}\).

Theorem A.1

Suppose Assumption 1.1 holds. Then for any \(0\leqslant b\leqslant 1\) and any \(c>0\)

$$\begin{aligned}&\sup _{k \in \mathbf {Z}} \;\sup _{|x-(k-\mu [nt])|\leqslant cn^{(1-b)/\alpha }}\left| n^{\frac{1}{\alpha }} \mathrm {P}_{[nt]}(k) - p_{\frac{[nt]}{n}}\left( x n^{-\frac{1}{\alpha }}\right) \right| \nonumber \\&\quad \lesssim \frac{1}{n^{a/\alpha } t^{(1+a)/\alpha }} + \frac{1}{n^{b/\alpha }t^{2/\alpha }}, \end{aligned}$$
(A.1)

uniformly for \(1/n\leqslant t\leqslant T\).

Proof

To simplify notation we denote \(\tilde{t}=[nt]/n\). We first bound the expression on the left for \(x=k-\mu [nt]\). Fourier inversion gives

$$\begin{aligned} \begin{aligned} p_{\tilde{t}}(x) = \frac{1}{2\pi } \int _{\mathbf {R}} e^{-\mathrm {i} xz} e^{-\nu \tilde{t}|z|^\alpha } \mathrm{d}z, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \mathrm {P}_{[nt]}(k) = \frac{1}{2\pi } \int _{-\pi }^{\pi } e^{-\mathrm {i} k z} \cdot \left[ \phi (z)\right] ^{[nt]} \, \mathrm{d}z. \end{aligned}$$

Therefore

$$\begin{aligned}\begin{aligned}&(2\pi )\left| n^{\frac{1}{\alpha }} \mathrm {P}_{[nt]}(k) - p_{\tilde{t}}\left( (k-\mu [nt])n^{-\frac{1}{\alpha }}\right) \right| \\&\quad \leqslant \int _{ \left[ -\pi n^{1/\alpha },\pi n^{1/\alpha }\right] ^c}e^{-\nu \tilde{t}|z|^\alpha } \mathrm{d}z + \int _{-\pi n^{1/\alpha }}^{\pi n^{1/\alpha }} \left| e^{-\nu \tilde{t}|z|^\alpha }- \tilde{\phi }\left( \frac{z}{n^{1/\alpha }}\right) ^{[nt]}\right| \mathrm{d}z. \end{aligned} \end{aligned}$$

The first term can be bounded just as the term \(I_2\) in Proposition 3.3 of [13], and we get

$$\begin{aligned} \int _{ \left[ -\pi n^{1/\alpha },\pi n^{1/\alpha }\right] ^c}e^{-\nu \tilde{t}|z|^\alpha } \mathrm{d}z \lesssim \frac{1}{n^{a/\alpha } t^{(1+a)/\alpha }}. \end{aligned}$$

For the second term we split the region of integration depending on whether or not \(z\) lies in

$$\begin{aligned} A_{t,n} :=\Big \{z\in \mathbf {R}; \, |z|\leqslant \frac{n^{a/\{\alpha (a+\alpha )\}}}{t^{1/(a+\alpha )}} \Big \}. \end{aligned}$$

Our assumptions on the characteristic function \(\phi \) imply

$$\begin{aligned} \Big \vert \tilde{\phi }\big (\frac{z}{n^{1/\alpha }}\big )\Big \vert \leqslant 1- c_1 \frac{|z|^\alpha }{n} \end{aligned}$$

on \(|z|\leqslant \pi n^{1/\alpha }\), and so it follows as in [13] that

$$\begin{aligned} \int _{A_{t,n}^c\cap [-\pi n^{1/\alpha },\pi n^{1/\alpha }]} \left| e^{-\nu \tilde{t}|z|^\alpha }- \tilde{\phi }\left( \frac{z}{n^{1/\alpha }}\right) ^{[nt]}\right| \, \mathrm{d}z \lesssim \frac{1}{n^{a/\alpha } t^{(1+a)/\alpha }}. \end{aligned}$$

We next need to consider the above integrand over the region \(z \in A_{t,n}\). First observe

$$\begin{aligned} \begin{aligned} e^{-\nu \tilde{t}|z|^\alpha }- \tilde{\phi }\left( \frac{z}{n^{1/\alpha }}\right) ^{[nt]}&= e^{-\nu \tilde{t}|z|^\alpha } - \exp \left[ [nt] \log \tilde{\phi }\left( \frac{z}{n^{1/\alpha }}\right) \right] \\&=e^{-\nu \tilde{t}|z|^\alpha } \left\{ 1-\exp \left[ [nt] \log \tilde{\phi }\left( \frac{z}{n^{1/\alpha }}\right) +\nu \tilde{t}|z|^\alpha \right] \right\} \\&= e^{-\nu \tilde{t}|z|^\alpha } \left\{ 1-\exp \left[ [nt] \log \left( 1-\nu \frac{|z|^\alpha }{n}+\mathcal {D}\left( \frac{z}{n^{1/\alpha }}\right) \right) \right. \right. \\&\quad \left. \left. +\nu \tilde{t}|z|^\alpha \right] \right\} . \end{aligned} \end{aligned}$$

It is easy to see that \([nt]\mathcal {D}\big (z/n^{1/\alpha }\big )\) is bounded on \(A_{t,n}\), and so

$$\begin{aligned}\begin{aligned} \int _{A_{t,n}\cap [-\pi n^{1/\alpha },\pi n^{1/\alpha }]} \left| e^{-\nu \tilde{t}|z|^\alpha }- \tilde{\phi }\left( \frac{z}{n^{1/\alpha }}\right) ^{[nt]}\right| \, \mathrm{d}z&\lesssim \int _{\mathbf {R}} e^{-\nu \tilde{t}|z|^\alpha } \cdot (nt) \left| \frac{z}{n^{1/\alpha }}\right| ^{a+\alpha } \mathrm{d}z \\&\lesssim \frac{1}{n^{a/\alpha } t^{(1+a)/\alpha }}. \end{aligned} \end{aligned}$$

We thus have the required bound in the case \(x=k-\mu [nt]\). In the case of a general \(x\) such that \(|x-(k-\mu [nt])|\leqslant cn^{(1-b)/\alpha }\) we have

$$\begin{aligned}\begin{aligned}&\left| p_{\tilde{t}}(xn^{-\frac{1}{\alpha }})-p_{\tilde{t}}((k-\mu [nt])n^{-\frac{1}{\alpha }})\right| \\&\quad \leqslant \int _{\mathbf {R}} \left| e^{-\mathrm {i}z(k-\mu [nt])n^{-\frac{1}{\alpha }}}- e^{-\mathrm {i}zxn^{-\frac{1}{\alpha }} } \right| e^{-\nu \tilde{t}|z|^\alpha }\, \mathrm{d}z \\&\quad \lesssim \int _{\mathbf {R}} 1\wedge \frac{|z|}{n^{b/\alpha }} \cdot e^{-\nu \tilde{t}|z|^\alpha } \mathrm{d}z \\&\quad \leqslant \frac{1}{n^{b/\alpha }t^{2/\alpha }}\int _{|w|\leqslant n^{b/\alpha }t^{1/\alpha }} |w| e^{-\nu |w|^\alpha } \mathrm{d}w +\frac{1}{t^{1/\alpha }} \int _{|w|> n^{b/\alpha }t^{1/\alpha }} e^{-\nu |w|^\alpha }\mathrm{d}w\\&\quad \lesssim \frac{1}{n^{b/\alpha }t^{2/\alpha }}. \end{aligned} \end{aligned}$$

This completes the proof. \(\square \)
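Although the theorem is stated for general \(\alpha \), its flavour is easy to test numerically in the Gaussian case \(\alpha =2\). The sketch below is an illustration only: it uses a hypothetical aperiodic step law (steps \(-1,0,1\) with probabilities \(1/4,1/2,1/4\), so \(\mu =0\) and variance \(1/2\)), computes the exact \(n\)-step pmf by convolution, and compares it with the Gaussian density of matching variance; the error, uniform in \(k\), is small, consistent with (A.1).

```python
import math

# Exact pmf of the walk after n steps, by repeated convolution.
step = {-1: 0.25, 0: 0.5, 1: 0.25}   # hypothetical aperiodic step law, variance 1/2
n = 400
pmf = {0: 1.0}
for _ in range(n):
    new = {}
    for pos, p in pmf.items():
        for s, q in step.items():
            new[pos + s] = new.get(pos + s, 0.0) + p * q
    pmf = new

# Gaussian density with matching variance n/2, evaluated on the lattice.
sigma2 = 0.5 * n
err = max(abs(p - math.exp(-k * k / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2))
          for k, p in pmf.items())
print(err)  # uniform-in-k discrepancy; small, and shrinking as n grows
```

Increasing \(n\) shrinks the discrepancy, roughly at the polynomial rate the theorem predicts.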

The local limit theorem has the following consequence.

Corollary A.2

Let Assumption 1.1 hold. Suppose \(a_n\) is an integer-valued sequence such that

$$\begin{aligned} \frac{a_n-\mu [nt]}{n^{1/\alpha }} \rightarrow a, \end{aligned}$$

as \(n \rightarrow \infty \), for some constant \(a\). Then for fixed \(t>0\)

$$\begin{aligned} \frac{1}{n^{(\alpha -1)/\alpha }} \sum _{i=0}^{[nt]} \mathrm {P}_i(a_n) \rightarrow \int _0^t\frac{\mathrm{d}s}{s^{1/\alpha }}\, p_1\left( \frac{a}{s^{1/\alpha }}\right) \; \text {as } n \rightarrow \infty . \end{aligned}$$

Proof

We write, discarding the \(i=0\) term (which is at most \(n^{-(\alpha -1)/\alpha }\) and hence negligible),

$$\begin{aligned} \begin{aligned} \frac{1}{n^{(\alpha -1)/\alpha }} \sum _{i=1}^{[nt]} \mathrm {P}_i(a_n)&= \frac{1}{n}\sum _{i=1}^{[nt]} n^{\frac{1}{\alpha }}\mathrm {P}_i(a_n) \\&= \frac{1}{n} \sum _{i=1}^{[nt]} \left[ p_{\frac{i}{n}}\left( \frac{a_n-\mu [nt]}{n^{1/\alpha }}\right) + O\left( \frac{n^{1/\alpha }}{i^{(1+a)/\alpha }}\right) \right] . \end{aligned} \end{aligned}$$

Use the scaling property: for any \(c>0\)

$$\begin{aligned} p_t(x) = c p_{c^\alpha t}(cx) \end{aligned}$$
(A.2)

to write the above as

$$\begin{aligned} \frac{1}{n} \sum _{i=1}^{[nt]} \frac{1}{(i/n)^{1/\alpha }}\cdot p_1\left( \left( \frac{n}{i}\right) ^{1/\alpha } \cdot \frac{a_n-\mu [nt]}{n^{1/\alpha }}\right) +o(1). \end{aligned}$$

The rest is a Riemann sum approximation. \(\square \)
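The Riemann-sum step can be illustrated numerically, again in the Gaussian case \(\alpha =2\) and with the normalization assumption that \(p_1\) is the standard normal density; both the value of \(a\) and the grid sizes below are arbitrary choices for illustration. The scaled sum from the proof stabilizes as \(n\) grows, approximating the integral in Corollary A.2.

```python
import math

def p1(x):
    # standard Gaussian density, playing the role of p_1 for alpha = 2
    # (a normalization assumption made purely for illustration)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def riemann_sum(n, t, a):
    # (1/n) * sum_{i=1}^{[nt]} (i/n)^{-1/alpha} * p1(a * (n/i)^{1/alpha}), alpha = 2
    return sum((n / i) ** 0.5 * p1(a * (n / i) ** 0.5)
               for i in range(1, int(n * t) + 1)) / n

coarse = riemann_sum(200, 1.0, 0.3)
fine = riemann_sum(50000, 1.0, 0.3)
print(coarse, fine)  # both approximate the limiting integral
```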

Appendix: Bounds required for Proposition 3.5

The main results of this section are Lemmas B.4 and B.5 which were needed in the proof of Proposition 3.5.

Lemma B.1

The following holds

$$\begin{aligned} \sup _{n}\sup _{w\in \mathbf {Z}} \frac{[n^\theta ]}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1} P\big (Y_{i[n^\theta ]}=w\big ) \leqslant \sup _{n} \frac{[n^\theta ]}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1} P\big (Y_{i[n^\theta ]}=0\big ) <\infty . \end{aligned}$$

Proof

The first inequality is a simple consequence of the fact that

$$\begin{aligned} P(Y_j=w) = \frac{1}{2\pi }\int _{-\pi }^{\pi } e^{-\mathrm i w z} \vert \phi (z)\vert ^{2j} \mathrm{d}z \leqslant P(Y_j=0) \end{aligned}$$

for any \(w\), since the characteristic function of \(Y_j\) is \(\vert \phi (z)\vert ^{2j}\). As for the finiteness of the sum, observe that \(P(Y_j=0)\) is decreasing in \(j\), and therefore

$$\begin{aligned} {[}n^\theta ]\sum _{i=1}^{r-1} P\big (Y_{i[n^\theta ]}=0\big ) \leqslant \sum _{j=0}^n P(Y_j=0) \lesssim n^{(\alpha -1)/\alpha }, \end{aligned}$$

the last inequality following from (2.7). \(\square \)
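The mechanism behind the first inequality is Cauchy–Schwarz: since \(Y_j\) has characteristic function \(\vert \phi \vert ^{2j}\), it has the law of the difference of two independent \(j\)-step walks, and \(P(Y_j=w)=\sum _x P(X_j=x)P(X_j=x+w)\leqslant \sum _x P(X_j=x)^2=P(Y_j=0)\). A direct check (with an arbitrary hypothetical step law, chosen asymmetric on purpose) is sketched below.

```python
# Y_j = X_j - X'_j for two independent j-step walks; its pmf is symmetric
# and, by Cauchy-Schwarz, maximized at 0.
step = {-1: 0.2, 0: 0.3, 2: 0.5}    # arbitrary hypothetical step law
j = 30

pmf = {0: 1.0}                      # pmf of a single walk after j steps
for _ in range(j):
    new = {}
    for pos, p in pmf.items():
        for s, q in step.items():
            new[pos + s] = new.get(pos + s, 0.0) + p * q
    pmf = new

diff = {}                           # pmf of Y_j = X_j - X'_j
for x, px in pmf.items():
    for y, py in pmf.items():
        diff[x - y] = diff.get(x - y, 0.0) + px * py

print(max(diff.values()) == diff[0])  # True: the maximum is attained at 0
```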

It follows from Assumption 1.1 on the characteristic function that the distribution function \(F(x)\) of \(X_1\) belongs to the domain of normal attraction of the symmetric stable law with exponent \(\alpha \) (see Section 35 of [17]). This means that one scales the centered sum \(X_n-\mu n\) by a constant multiple of \(n^{1/\alpha }\). Such distribution functions can be characterized as follows.

Lemma B.2

([17]) A necessary and sufficient condition for \(F\) to be in the domain of normal attraction of a Stable(\(\alpha \)) law, \(0<\alpha <2\), is the existence of constants \( c_1,c_2\geqslant 0,\, c_1+c_2>0\) such that

$$\begin{aligned} F(x)&= \big (c_1 + f_1(x)\big ) \frac{1}{|x|^\alpha }, \qquad \text {for } x<0, \\ F(x)&= 1-\big (c_2+ f_2(x)\big ) \frac{1}{|x|^\alpha }, \;\;\text {for } x>0, \end{aligned}$$

where the functions \(f_1\) and \(f_2\) satisfy

$$\begin{aligned} \lim _{x \rightarrow -\infty } f_1(x) = \lim _{x\rightarrow \infty } f_2(x)=0. \end{aligned}$$
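For a concrete illustration of the lemma, consider the two-sided Pareto law with density \(\frac{\alpha }{2}|x|^{-\alpha -1}\) for \(|x|\geqslant 1\) (and zero otherwise). Its distribution function is

$$\begin{aligned} F(x) = \frac{1}{2}\cdot \frac{1}{|x|^{\alpha }} \quad \text {for } x\leqslant -1, \qquad F(x) = 1- \frac{1}{2}\cdot \frac{1}{x^{\alpha }} \quad \text {for } x\geqslant 1, \end{aligned}$$

so the criterion holds with \(c_1=c_2=1/2\) and with \(f_1, f_2\) vanishing identically outside \([-1,1]\); this law therefore lies in the domain of normal attraction of the symmetric Stable(\(\alpha \)) law.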

We use the above lemma to deduce the following large deviation estimate.

Lemma B.3

There exists a constant \(c_3>0\) such that

$$\begin{aligned} P \big (\vert X_{[nt]-r[n^\theta ]}\vert \geqslant c_3n^\theta \big )\lesssim \frac{1}{n^{(\alpha -1)\theta }}. \end{aligned}$$

Proof

For \(1<\alpha <2\) we use the result in [20]. This gives for \(c_3>|\mu |\)

$$\begin{aligned} P(|X_{[nt]-r[n^\theta ]}| \geqslant c_3n^\theta )&\lesssim n^\theta P(|X_1| \geqslant c_3 n^\theta )\\&\lesssim \frac{n^\theta }{n^{\alpha \theta }}, \end{aligned}$$

from the previous lemma. For \(\alpha =2\), one can use Exercise 3.3.19 in [11] to conclude that \(X_1\) has second moments. One then uses Chebyschev’s inequality to prove the lemma. \(\square \)

For the rest of this section we provide the proofs of the bounds required for the two terms in (3.14). Below is the bound on the first term.

Lemma B.4

The first term in (3.14) has the bound

$$\begin{aligned} \frac{1}{n^{(\alpha -1)/\alpha }} \Big \vert \sum _{i,j} \sum _{k,l} \mathrm {P}_j(l) \cdot \Big [ \mathrm {P}_j(l)-\mathrm {P}_{i[n^\theta ]}\big (k[n^\gamma ]\big )\Big ]\Big \vert \lesssim \frac{1}{n^{(\alpha -1)\theta }} + \frac{n^{\theta +o(1)}}{n^{(\alpha -1)/\alpha }}, \end{aligned}$$

where the limits of the summation are as in (3.12).

Proof

We write

$$\begin{aligned} \begin{aligned}&\frac{1}{n^{(\alpha -1)/\alpha }} \sum _{i,j} \sum _{k,l} \mathrm {P}_j(l) \cdot \Big [ \mathrm {P}_j(l)-\mathrm {P}_{i[n^\theta ]}\big (k[n^\gamma ]\big )\Big ] \\&\quad = \frac{1}{n^{(\alpha -1)/\alpha }} \sum _{i,j} P(X_j=\tilde{X}_j) \\&\qquad - \frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i,j}\sum _{k,l}\sum _{w\in \mathbf {Z}} P\left( X_{i[n^\theta ]}=l-w,\, \tilde{X}_{i[n^\theta ]}=k[n^\gamma ]\right) \cdot \mathrm {P}_{j-i[n^\theta ]}(w), \end{aligned} \end{aligned}$$
(B.1)

the equality holding by an application of the Markov property. We focus on the second term first and return to the first term later. We split the sum according to whether or not \(i=0\). The \(i=0\) term is of order \(n^{\theta }n^{-(\alpha -1)/\alpha }\). For the term corresponding to \(i\geqslant 1\), replace \(\tilde{X}_{i[n^\theta ]} = k[n^\gamma ]\) by \(\tilde{X}_{i[n^\theta ]}=l\) using Theorem A.1 with \(b=1-\alpha \gamma \) to obtain

$$\begin{aligned} \begin{aligned}&\frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i,j}\sum _{k,l} \sum _w P\left( X_{i[n^\theta ]}=l-w,\, \tilde{X}_{i[n^\theta ]}=k[n^\gamma ]\right) \cdot \mathrm {P}_{j-i[n^\theta ]}(w) \\&\quad =O\left( \frac{n^\theta }{n^{(\alpha -1)/\alpha }}\right) + \frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1}\sum _{j}\left[ O\left( \frac{1}{(in^\theta )^{(1+a)/\alpha }} \right) + O\left( \frac{n^\gamma }{(in^\theta )^{2/\alpha }}\right) \right] \\&\qquad +\frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1}\sum _{j}\sum _{k,l} \sum _{w} P\left( X_{i[n^\theta ]}=l-w, \tilde{X}_{i[n^\theta ]}=l\right) \cdot \mathrm {P}_{j-i[n^\theta ]}(w) \\&\quad =O\left( \frac{n^\theta }{n^{(\alpha -1)/\alpha }}\right) +O\left( \frac{n^{o(1)}}{n^{\min (a, \alpha -1)/\alpha }}\right) \\&\qquad +\frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1}\sum _{j}\sum _{w} P\left( X_{i[n^\theta ]}-\tilde{X}_{i[n^\theta ]}=w\right) \cdot \mathrm {P}_{j-i[n^\theta ]}(w). \end{aligned} \end{aligned}$$

Returning to our expression (B.1), we can now write it as

$$\begin{aligned} \begin{aligned}&\frac{1}{n^{(\alpha -1)/\alpha }} \sum _{i,j} P(X_j=\tilde{X}_j) -\frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1} \sum _j\sum _{|w|\leqslant c_3n^\theta } P\left( X_{i[n^\theta ]}-\tilde{X}_{i[n^\theta ]}=w\right) \\&\quad \cdot \mathrm {P}_{j-[in^\theta ]}(w) \\&\quad +O\left( \frac{1}{n^{(\alpha -1)\theta }}\right) +O\left( \frac{n^\theta }{n^{(\alpha -1)/\alpha }}\right) +O\left( \frac{n^{o(1)}}{n^{\min (a, \alpha -1)/\alpha }}\right) , \end{aligned} \end{aligned}$$

where we have used Lemmas B.1 and B.3, with the constant \(c_3\) taken from the latter. Let us now consider each of the first two terms above. Using Theorem A.1 as well as (A.2), one gets for the first term

$$\begin{aligned} \frac{1}{n^{(\alpha -1)/\alpha }} \sum _{i,j} P(X_j=\tilde{X}_j) = \frac{\alpha \tilde{p}_1(0) }{\alpha -1} \cdot \left( \frac{r[n^\theta ]-1}{n}\right) ^{(\alpha -1)/\alpha } + O\left( \frac{n^{o(1)}}{n^{\min (a, \,\alpha -1)/\alpha }}\right) , \end{aligned}$$

where \(\tilde{p}_1(\cdot )\) is the transition kernel of \(-2\nu (-\Delta )^{\alpha /2}\). Using Theorem A.1 again with \(b=1-\alpha \theta \), we get for \(|w|\leqslant c_3n^\theta \)

$$\begin{aligned}&\frac{[n^\theta ]}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1}\sum _{|w|\leqslant c_3[n^\theta ]} P\left( X_{i[n^\theta ]}-\tilde{X}_{i[n^\theta ]}=w\right) \cdot \mathrm {P}_{j-i[n^\theta ]}(w) \\&\quad =\frac{[n^\theta ]}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1}\left[ \frac{\tilde{p}_1(0)}{(i[n^\theta ])^{1/\alpha }}+ O\left( \frac{1}{(in^\theta )^{(1+a)/\alpha }}\right) +O\left( \frac{n^\theta }{(in^\theta )^{2/\alpha }}\right) \right] \\&\quad = \frac{\alpha \tilde{p}_1(0) }{\alpha -1} \cdot \left( \frac{r[n^\theta ]-1}{n}\right) ^{(\alpha -1)/\alpha } +O\left( \frac{n^{\theta (\alpha -1)/\alpha }}{n^{(\alpha -1)/\alpha }}\right) \\&\qquad + O\left( \frac{n^{o(1)}}{n^{\min (a,\, \alpha -1)/\alpha }}\right) + O\left( \frac{n^{\theta +o(1)}}{n^{(\alpha -1)/\alpha }}\right) . \end{aligned}$$

Collecting all our estimates and recalling our conditions (3.2) on \(\gamma \) and \(\theta \) completes the proof. \(\square \)

We next bound the second term in (3.14).

Lemma B.5

The second term in (3.14) has the bound

$$\begin{aligned} \frac{1}{n^{(\alpha -1)/\alpha }}\Big \vert \sum _{i,j} \sum _{k,l} \mathrm {P}_{i[n^\theta ]}\big (k[n^\gamma ]\big )\cdot \Big [ \mathrm {P}_{i[n^\theta ]}\big (k[n^\gamma ]\big )-\mathrm {P}_j(l)\Big ]\Big \vert \lesssim \frac{n^{\gamma }}{n^{(\alpha -1)\theta }}+ \frac{n^{\theta +\gamma +o(1)}}{n^{(\alpha -1)/\alpha }} \end{aligned}$$

where the limits in the summation are as in (3.14).

Proof

We separate out the \(i=0\) term and obtain

$$\begin{aligned} \begin{aligned}&\frac{1}{n^{(\alpha -1)/\alpha }} \sum _{i,j} \sum _{k,l} \mathrm {P}_{i[n^\theta ]}\big (k[n^\gamma ]\big )\cdot \Big [ \mathrm {P}_{i[n^\theta ]}\big (k[n^\gamma ]\big )-\mathrm {P}_j(l)\Big ] \\&\quad = O \left( \frac{n^{\theta +\gamma }}{n^{(\alpha -1)/\alpha }}\right) + \frac{1}{n^{(\alpha -1)/\alpha }} \sum _{i=1}^{r-1} \sum _{j, k,l} \sum _w \mathrm {P}_{i[n^\theta ]}\big (k[n^\gamma ]\big )\\&\qquad \cdot \Big [ \mathrm {P}_{i[n^\theta ]}\big (k[n^\gamma ]\big )-\mathrm {P}_{i[n^\theta ]}(l-w)\Big ]\cdot \mathrm {P}_{j-i[n^\theta ]}(w) \end{aligned} \end{aligned}$$

The error bound for the first term arises because restricting to \(i=0\) forces \(k=0\), and the number of terms in the summation over \(j\) and \(l\) is of order \(n^{\theta +\gamma }\). As in Lemma B.4, we split the sum over \(w\) according to whether or not \(|w|\leqslant c_3[n^\theta ]\). In the case \(|w|\leqslant c_3[n^\theta ]\) we use Theorem A.1 with \(b=1-\alpha \theta \). Thus the above is

$$\begin{aligned} \begin{aligned}&=O \left( \frac{n^{\theta +\gamma }}{n^{(\alpha -1)/\alpha }}\right) +\frac{n^{\gamma +\theta }}{n^{(\alpha -1)/\alpha }} \sum _{i=1}^{r-1} \left[ O\left( \frac{1}{(in^\theta )^{(1+a)/\alpha }}\right) + O\left( \frac{n^\theta }{(in^\theta )^{2/\alpha }}\right) \right] \\&\quad + \frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1} \sum _{j,k,l}\sum _{|w|\geqslant c_3n^\theta } \mathrm {P}_{i[n^\theta ]}(k[n^\gamma ])\cdot \left[ \mathrm {P}_{i[n^\theta ]}(k[n^\gamma ]) -\mathrm {P}_{i[n^\theta ]}(l-w)\right] \\&\quad \cdot \mathrm {P}_{j-i[n^\theta ]}(w) \\&= O\left( \frac{n^{\gamma +o(1)}}{n^{\min (a, \alpha -1)/\alpha }}\right) + O \left( \frac{n^{\gamma +\theta +o(1)}}{n^{(\alpha -1)/\alpha }}\right) \\&\quad + \frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1} \sum _{j,k,l}\sum _{|w|\geqslant c_3n^\theta } \mathrm {P}_{i[n^\theta ]}(k[n^\gamma ])\cdot \left[ \mathrm {P}_{i[n^\theta ]}(k[n^\gamma ]) -\mathrm {P}_{i[n^\theta ]}(l-w)\right] \\&\quad \cdot \mathrm {P}_{j-i[n^\theta ]}(w) \end{aligned} \end{aligned}$$

To bound the last term we do not exploit any cancellation in the difference, but instead bound the two resulting sums separately. By Lemmas B.1 and B.3

$$\begin{aligned} \frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1} \sum _{j,k,l}\sum _{|w|\geqslant c_3n^\theta } \mathrm {P}^2_{i[n^\theta ]}(k[n^\gamma ])\cdot \mathrm {P}_{j-i[n^\theta ]}(w) \lesssim \frac{n^{\gamma }}{n^{(\alpha -1)\theta }}, \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}&\frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1} \sum _{j,k,l}\sum _{|w|\geqslant c_3n^\theta } \mathrm {P}_{i[n^\theta ]}(k[n^\gamma ])\cdot \mathrm {P}_{i[n^\theta ]}(l-w)\cdot \mathrm {P}_{j-i[n^\theta ]}(w) \\&\quad \lesssim \frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1}\sum _j\sum _{y=0}^{[n^\gamma ]-1}\sum _{|w|\geqslant c_3n^\theta } P\left( \tilde{X}_{i[n^\theta ]}- X_{i[n^\theta ]} = y-w\right) \cdot \mathrm {P}_{j-i[n^\theta ]}(w) \\&\quad \lesssim \frac{1}{n^{(\alpha -1)/\alpha }}\sum _{i=1}^{r-1}\sum _j\sum _{y=0}^{[n^\gamma ]-1}\sum _{|w|\geqslant c_3n^\theta } P\left( \tilde{X}_{i[n^\theta ]}- X_{i[n^\theta ]} = 0\right) \cdot \mathrm {P}_{j-i[n^\theta ]}(w) \\&\quad \lesssim \frac{n^{\gamma }}{n^{(\alpha -1)\theta }}, \end{aligned} \end{aligned}$$

where we used Lemma B.1 in the last step. Collecting our estimates completes the proof. \(\square \)


Cite this article

Joseph, M. An invariance principle for the stochastic heat equation. Stoch PDE: Anal Comp 6, 690–745 (2018). https://doi.org/10.1007/s40072-018-0118-9
