1 Introduction to the series

In this two-paper series, we study the renormalized wave equation with a Hartree nonlinearity and random initial data given by

$$\begin{aligned} {\left\{ \begin{array}{ll} -\partial _{t}^2 u - u +\Delta u = \, :(V*u^2) u :\qquad (t,x)\in {\mathbb {R}}\times {\mathbb {T}}^3, \\ u|_{t=0} = \phi _0, \quad \partial _t u |_{t=0} = \phi _1. \end{array}\right. } \end{aligned}$$
(a)

Here, \({\mathbb {T}}\overset{\text {def}}{=}{\mathbb {R}}/ 2\pi {\mathbb {Z}}\) is the torus and the interaction potential \(V :{\mathbb {T}}^3 \rightarrow {\mathbb {R}}\) is of the form \(V(x)= c_\beta |x|^{-(3-\beta )}\) for all small \(x\in {\mathbb {T}}^3\), where \(0<\beta <3\). Furthermore, V satisfies \(V(x) \gtrsim 1\) for all \(x\in {\mathbb {T}}^3\), is even, and is smooth away from the origin. The nonlinearity \(:(V*u^2) u :\) is a renormalization of \((V*u^2) u\) (see Definition 2.6 below).

The nonlinear wave equation (a) is a prototypical example of a Hamiltonian partial differential equation. The formal Hamiltonian is given by

$$\begin{aligned} H[u,\partial _t u](t)= & {} \frac{1}{2} \Big ( \Vert u(t) \Vert _{L_x^2}^2 + \Vert \nabla u(t) \Vert _{L_x^2}^2 + \Vert \partial _t u(t) \Vert _{L_x^2}^2 \Big ) \\&+\, \frac{1}{4} \int _{{\mathbb {T}}^3} :(V*u^2)(t,x) u(t,x)^2 :\,{\mathrm {d}}x, \end{aligned}$$

where \(L^2_x=L_x^2({\mathbb {T}}^3)\). Based on the Hamiltonian structure, we expect the formal Gibbs measure \(\mu ^{\otimes }\) given by

$$\begin{aligned} {\mathrm {d}} \mu ^{\otimes } (\phi _0,\phi _1) = {\mathcal {Z}}^{-1} \exp (- H(\phi _0,\phi _1) ) \, {\mathrm {d}}\phi _0 {\mathrm {d}}\phi _1 \end{aligned}$$
(b)

to be invariant under the flow of (a), where \({\mathcal {Z}}\) is a normalization constant.
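
To see why (a) is the formal Hamiltonian flow of H, one can compute the variational derivatives in the conjugate variables \((u,p)\) with \(p=\partial _t u\); the following sketch ignores the renormalization.

$$\begin{aligned} \partial _t u = \frac{\delta H}{\delta p} = p, \qquad \partial _t p = - \frac{\delta H}{\delta u} = - u + \Delta u - (V*u^2) u, \end{aligned}$$

which, after eliminating p, is exactly the first line of (a) before renormalization.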

The first part of this series focuses on the rigorous construction and properties of \(\mu ^{\otimes }\). With a primary focus on the related \(\Phi ^4_d\)-models, similar constructions have been studied in constructive quantum field theory. Recently, this area of research has been revitalized through advances in singular stochastic partial differential equations. The main difficulties come from the quartic interaction \(:(V *u^2) u^2:\) in the Hamiltonian. In fact, without the interaction, one obtains the Gaussian free field, which can be constructed through elementary arguments. Using our representation of the Gibbs measure \(\mu ^{\otimes }\), we also prove that \(\mu ^{\otimes }\) and the Gaussian free field are mutually singular for \({0<\beta <1/2}\).

In the second part of this series, we study the dynamics of (a) with random initial data drawn from the Gibbs measure \(\mu ^{\otimes }\). Due to the low spatial regularity, the local theory requires a mix of techniques from dispersive equations, harmonic analysis, and probability theory. More specifically, we rely on ideas from the para-controlled calculus of Gubinelli, Imkeller, and Perkowski [20]. The heart of this series, however, lies in the global theory. Our main contribution is a new form of Bourgain’s globalization argument [7], which addresses the singularity of the Gibbs measure and its consequences.

We now state a qualitative version of our main theorem, which combines our measure-theoretic and dynamical results. For the quantitative version, we refer the reader to Theorem 1.1 below and Theorem 1.3 in the second part of this series. We recall that the parameter \({0<\beta <3}\) determines the regularity of the interaction potential V.

Main Theorem

(Global well-posedness and invariance, qualitative version). The formal Gibbs measure \(\mu ^{\otimes }\) exists and, for \(0<\beta <1/2\), is singular with respect to the Gaussian free field. The renormalized wave equation with Hartree nonlinearity (a) is globally well-posed on the support of \(\mu ^{\otimes }\) and the dynamics leave \(\mu ^{\otimes }\) invariant.

This is the first example of an invariant Gibbs measure for a dispersive equation which is singular with respect to the Gaussian free field.

2 Introduction

In the first paper of this series, we rigorously construct and study the formal Gibbs measure \(\mu ^{\otimes }\) from (b) above. Since the Hamiltonian \(H[\phi _0,\phi _1]\) splits into a sum of functions in \(\phi _0\) and \(\phi _1\), we can rewrite (b) as

$$\begin{aligned}&{\mathrm {d}} \mu ^{\otimes } (\phi _0,\phi _1) \\&\quad = {\mathcal {Z}}^{-1}_0 \exp \Big ( - \frac{1}{4} \int _{{\mathbb {T}}^3} :( V*\phi _0^2) \phi _0^2 :\,{\mathrm {d}}x- \frac{1}{2} \Vert \phi _0 \Vert _{L^2}^2 \\&\qquad - \frac{1}{2} \Vert \nabla \phi _0 \Vert _{L^2}^2 \Big ) \, {\mathrm {d}} \phi _0 \, \otimes \, {\mathcal {Z}}^{-1}_1 \exp \Big ( - \frac{1}{2} \Vert \phi _1\Vert _{L^2}^2 \Big ) \, {\mathrm {d}} \phi _1. \end{aligned}$$

The construction and properties of the second factor are elementary (as will be explained below), and we now focus on the first factor. As a result, we are interested in the rigorous construction of a measure \(\mu \) which is formally given by

$$\begin{aligned} {\mathrm {d}}\mu (\phi ) = {\mathcal {Z}}^{-1} \exp \Big ( - \frac{1}{4} \int _{{\mathbb {T}}^3} :( V*\phi ^2) \phi ^2 :\,{\mathrm {d}}x- \frac{1}{2} \Vert \phi \Vert _{L^2({\mathbb {T}}^3)}^2 - \frac{1}{2} \Vert \nabla \phi \Vert _{L^2({\mathbb {T}}^3)}^2 \Big ) \, {\mathrm {d}} \phi .\nonumber \\ \end{aligned}$$
(1.1)

Our Gibbs measure \(\mu \) is closely related to the \(\Phi ^4_d\)-models, which replace the three-dimensional torus \({\mathbb {T}}^3\) by the more general d-dimensional torus \({\mathbb {T}}^d\) and replace the integrand \(:(V*\phi ^2) \phi ^2 :\) by the renormalized quartic power \(:\phi ^4 :\). Thus, the \(\Phi ^4_d\)-model is formally given by

$$\begin{aligned} {\mathrm {d}}\Phi ^4_d(\phi ) = {\mathcal {Z}}^{-1} \exp \Big ( - \frac{1}{4} \int _{{\mathbb {T}}^d} :\phi ^4 :\,{\mathrm {d}}x- \frac{1}{2} \Vert \phi \Vert _{L^2({\mathbb {T}}^d)}^2 - \frac{1}{2} \Vert \nabla \phi \Vert _{L^2({\mathbb {T}}^d)}^2 \Big ) \, {\mathrm {d}} \phi .\nonumber \\ \end{aligned}$$
(1.2)

Aside from their connection to Hamiltonian PDEs, such as nonlinear wave and Schrödinger equations, the \(\Phi ^4_d\)-models are of independent interest in quantum field theory (cf. [18]). In most rigorous constructions of measures such as \(\mu \) or the \(\Phi ^4_d\)-models, the first step consists of a regularization. For instance, one may insert a frequency-truncation in the nonlinearity or replace the continuous spatial domain by a discrete lattice. In a second step, one then proves the convergence of the regularized measures as the regularization is removed, either by direct estimates or compactness arguments.

With a particular focus on \(\Phi ^4_d\)-models, the question of convergence of the regularized measures has been extensively studied over several decades. The first proof of convergence was a major success of the constructive field theory program, which thrived during the 1970s and 1980s. We refer the reader to the excellent introduction of [19] for a detailed overview and the original works [2, 4, 17, 21, 28, 37, 39, 42].

In the 1990s, Bourgain [7,8,9] revisited the \(\Phi ^4_d\)-model in dimension \(d=1,2\) using tools from harmonic analysis and introduced these problems into the dispersive PDE community. Bourgain’s works [7,8,9] also contain important dynamical insights, which will be utilized in the second part of this series.

Based on the method of stochastic quantization, which was introduced by Nelson [31, 32] and Parisi-Wu [38], the construction and properties of the \(\Phi ^4_d\)-models have also been studied over the last twenty years in the stochastic PDE community. The main idea behind stochastic quantization is that the \(\Phi ^4_d\)-measure is formally invariant under the stochastic nonlinear heat equation

$$\begin{aligned} \partial _t u + u - \Delta u = - :u^3 :+ \sqrt{2} \xi \qquad (t,x)\in {\mathbb {R}}\times {\mathbb {T}}^d, \end{aligned}$$
(1.3)

where \(\xi \) is space-time white noise. After prescribing simple initial data, such as \(u(0)=0\), one hopes to obtain the \(\Phi ^4_d\)-measure as the limit of the law of u(t) as \(t\rightarrow \infty \). In spatial dimensions \(d=1,2\), this approach was carried out by Iwata [27] and Da Prato-Debussche [15], respectively. In spatial dimension \(d=3\), however, (1.3) is highly singular and the local well-posedness theory of (1.3) is beyond classical methods in stochastic partial differential equations. In groundbreaking work [25], Hairer introduced regularity structures, which provide a detailed description of the local dynamics of (1.3). Alternatively, the local well-posedness of (1.3) was obtained by Catellier and Chouk [12] using the para-controlled calculus of Gubinelli, Imkeller, and Perkowski [20]. In order to construct the \(\Phi ^4_3\)-model using (1.3), however, local control over the solution is not sufficient, and one needs a global well-posedness theory. The global theory has been addressed very recently in [1, 19, 26, 29], which combine regularity structures or para-controlled calculus with further PDE arguments, such as the energy method. Using similar tools, Barashkov and Gubinelli [5, 6] recently developed a variational approach to the \(\Phi ^4_3\)-model, which does not directly rely on the stochastic heat equation (1.3). Their work forms the basis of this paper and will be discussed in more detail below.

After this broad overview of the relevant literature, we now begin a more detailed discussion of the previous methods. Throughout this discussion we encourage the reader to think of the nonlinear wave equation as a Hamiltonian system of ordinary differential equations in Fourier space. We begin with the elementary construction of the Gaussian free field. Then, we discuss the construction of the \(\Phi ^4_1\) and \(\Phi ^4_2\)-models using harmonic analysis, similar to Bourgain’s works [7, 8], and the construction of the \(\Phi ^4_3\)-model using the variational approach of Barashkov and Gubinelli [5].

Given a function \(\phi :{\mathbb {T}}^d \rightarrow {\mathbb {R}}\), its Fourier expansion is given by

$$\begin{aligned} \phi (x) = \sum _{n\in {\mathbb {Z}}^d} {\widehat{\phi }}(n) e^{i\langle n , x \rangle }. \end{aligned}$$
(1.4)

Due to the real-valuedness of \(\phi \), the sequence \(({\widehat{\phi }}(n))_{n \in {\mathbb {Z}}^d}\) satisfies the symmetry condition \(\overline{{\widehat{\phi }}}(n)= {\widehat{\phi }}(-n)\). In order to respect this symmetry, we let \(\Lambda \subseteq {\mathbb {Z}}^d\) be such that \({\mathbb {Z}}^d = \{ 0 \} \, \biguplus \, \Lambda \, \biguplus \, (-\Lambda )\), where \(\biguplus \) denotes the disjoint union. For \(n\in \Lambda \), we denote by \({\mathrm {d}}{\widehat{\phi }}(n)\) the Lebesgue measure on \({\mathbb {C}}\), and for \(n=0\), we denote by \({\mathrm {d}}{\widehat{\phi }}(0)\) the Lebesgue measure on \({\mathbb {R}}\). We can then formally identify the d-dimensional Gaussian free field

$$\begin{aligned} {{\mathrm {d}}}{\mathscr {g}}_d(\phi ) ={\mathcal {Z}}^{-1} \exp \Big ( - \frac{1}{2} \Vert \phi \Vert _{L^2({\mathbb {T}}^d)}^2 - \frac{1}{2} \Vert \nabla \phi \Vert _{L^2({\mathbb {T}}^d)}^2 \Big ) \, {\mathrm {d}} \phi \end{aligned}$$
(1.5)

as the push-forward under the Fourier transform of

$$\begin{aligned} \begin{aligned}&{\mathcal {Z}}^{-1} \exp \Big ( - \frac{1}{2} \sum _{n\in {\mathbb {Z}}^d} (1+|n|^2) |{\widehat{\phi }}(n)|^2 \Big ) \bigotimes _{n\in \{ 0 \} \cup \Lambda } {\mathrm {d}} {\widehat{\phi }}(n) \\&\quad = \frac{1}{2\pi } \exp \Big ( - \frac{1}{2} |{\widehat{\phi }}(0)|^2 \Big ) {\mathrm {d}} {\widehat{\phi }}(0) \otimes \Big ( \bigotimes _{n\in \Lambda } \frac{1}{\pi \langle n \rangle ^2} \exp \Big ( - \langle n \rangle ^2 |{\widehat{\phi }}(n)|^2 \Big ) {\mathrm {d}} {\widehat{\phi }}(n) \Big ), \end{aligned} \end{aligned}$$
(1.6)

where \(\langle n \rangle ^2 = 1 + |n|^2 \). While (1.5) is entirely formal, the right-hand side of (1.6) is a well-defined product measure. Under the measure in (1.6), \({\widehat{\phi }}(0)\) is a standard real-valued Gaussian and \( ( {\widehat{\phi }}(n))_{n\in \Lambda } \) is a sequence of independent complex Gaussians satisfying \({\mathbb {E}}|{\widehat{\phi }}( n )|^2 = \langle n \rangle ^{-2}\). Turning this formal discussion around, we let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be an ambient probability space containing a sequence of independent complex-valued standard Gaussians \((g_n)_{n \in \Lambda }\) and a standard real-valued Gaussian \(g_0\). Then, we can rigorously define the Gaussian free field \({\mathscr {g}}_d\) by

$$\begin{aligned} {{\mathrm {d}}}{\mathscr {g}}_d(\phi ) = \Big ( \sum _{n\in {\mathbb {Z}}^d} \frac{g_n}{\langle n\rangle } e^{i\langle n , x \rangle }\Big )_{\#} {\mathbb {P}}, \end{aligned}$$
(1.7)

where the subscript \(\#\) denotes the pushforward. Using the representation (1.7), we see that a typical sample of \({\mathscr {g}}_d\) almost surely lies in \(H_x^s({\mathbb {T}}^d)\) for all \(s < 1 - d/2\) but not in \(H_x^{1-d/2}({\mathbb {T}}^d)\).
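
The regularity claim can be verified at the level of second moments; this is only a sketch, since the almost-sure statement also requires Gaussian tail bounds. Using the representation (1.7) and \({\mathbb {E}}|g_n|^2=1\),

$$\begin{aligned} {\mathbb {E}} \Vert \phi \Vert _{H^s_x({\mathbb {T}}^d)}^2 = \sum _{n\in {\mathbb {Z}}^d} \langle n \rangle ^{2s} \, {\mathbb {E}} \Big | \frac{g_n}{\langle n \rangle } \Big |^2 = \sum _{n\in {\mathbb {Z}}^d} \langle n \rangle ^{2s-2}, \end{aligned}$$

which is finite if and only if \(2s-2<-d\), i.e., \(s<1-d/2\).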

We now turn to the construction of the \(\Phi ^4_1\) and \(\Phi ^4_2\)-models. Based on our formal expression of the \(\Phi ^4_1\)-model in (1.2), we would like to define

$$\begin{aligned} {\mathrm {d}} \Phi ^4_1(\phi ) \overset{\text {def}}{=}{\mathcal {Z}}^{-1} \exp \Big ( - \frac{1}{4} \int _{{\mathbb {T}}} \phi ^4(x) \,{\mathrm {d}}x\Big ) {{\mathrm {d}}}{\mathscr {g}}_1(\phi ). \end{aligned}$$
(1.8)

Using either Sobolev embedding or Khintchine’s inequality, we obtain \({\mathscr {g}}_1\)-almost surely that \(0< \Vert \phi \Vert _{L^4({\mathbb {T}})}< \infty \). This implies that the density \({\mathrm {d}} \Phi ^4_1 / {\mathrm {d}}{\mathscr {g}}_1\) is well-defined, almost surely positive, and lies in \(L^q({\mathscr {g}}_1)\) for all \(1\le q \le \infty \). In particular, the \(\Phi ^4_1\)-model is absolutely continuous with respect to the Gaussian free field \({\mathscr {g}}_1\). We emphasize that the potential energy in (1.8) does not require a renormalization. Furthermore, we can define truncated \(\Phi ^4_1\)-models by

$$\begin{aligned} {\mathrm {d}} \Phi ^4_{1;N}(\phi ) \overset{\text {def}}{=}{\mathcal {Z}}^{-1}_N \exp \Big ( - \frac{1}{4} \int _{{\mathbb {T}}} (P_{\le N} \phi )^4(x) \,{\mathrm {d}}x\Big ) {\mathrm {d}}{\mathscr {g}}_1(\phi ), \end{aligned}$$

where N is a dyadic integer and \(P_{\le N}\) a Littlewood-Paley projection. As was shown in [7], direct estimates yield the convergence of \({\mathrm {d}} \Phi ^4_{1;N}/ {\mathrm {d}}{\mathscr {g}}_1\) in \(L^q({\mathscr {g}}_1)\) for all \(1\le q < \infty \) and hence \(\Phi ^4_{1;N}\) converges to \(\Phi ^4_1\) in total variation as N tends to infinity.
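
The \(L^q\)-bounds on the density \({\mathrm {d}} \Phi ^4_1 / {\mathrm {d}}{\mathscr {g}}_1\) stated above are an immediate consequence of the defocusing sign of the interaction; we record the one-line argument as a sketch:

$$\begin{aligned} 0 < \frac{{\mathrm {d}} \Phi ^4_1}{{\mathrm {d}}{\mathscr {g}}_1}(\phi ) = {\mathcal {Z}}^{-1} \exp \Big ( - \frac{1}{4} \Vert \phi \Vert _{L^4({\mathbb {T}})}^4 \Big ) \le {\mathcal {Z}}^{-1} \qquad {\mathscr {g}}_1\text {-almost surely}, \end{aligned}$$

so the density is bounded and positive, and hence lies in \(L^q({\mathscr {g}}_1)\) for all \(1\le q \le \infty \); the finiteness and positivity of \({\mathcal {Z}}\) follow for the same reason.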

In two spatial dimensions, however, we encounter a new difficulty. Since \({\mathscr {g}}_2\)-almost surely \(\Vert \phi \Vert _{L^2}=\infty \), the potential energy \(\Vert \phi \Vert _{L^4}^4\) is almost surely infinite. As a result, the potential energy requires a renormalization. A direct calculation using the definition of \(P_{\le N}\) in (1.14) below yields

$$\begin{aligned} \sigma _N^2 = \int {\mathrm {d}} {\mathscr {g}}_2(\phi ) \, \Vert P_{\le N} \phi \Vert _{L^2({\mathbb {T}}^2)}^2 \sim \log (N). \end{aligned}$$
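
The logarithmic divergence can be seen by summing the variances of the Fourier coefficients over the truncated frequencies; up to the precise choice of cutoff in \(P_{\le N}\), one has

$$\begin{aligned} \sigma _N^2 \sim \sum _{n \in {\mathbb {Z}}^2 :\, |n| \le N} \frac{1}{\langle n \rangle ^{2}} \sim \int _1^N \frac{r}{r^2} \, {\mathrm {d}}r \sim \log (N). \end{aligned}$$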

We then replace the monomial \((P_{\le N} \phi )^4\) by the Hermite polynomial

$$\begin{aligned} :(P_{\le N} \phi )^4 := (P_{\le N} \phi )^4 - 6 \sigma _N^2 (P_{\le N} \phi )^2 + 3 \sigma _N^4. \end{aligned}$$
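
As a consistency check, the coefficients in the Hermite polynomial are exactly those making the Wick power centered. Writing \(X = P_{\le N}\phi (x)\), a mean-zero Gaussian whose variance is \(\sigma _N^2\) in the normalization used here, the Gaussian moments \({\mathbb {E}} X^2 = \sigma _N^2\) and \({\mathbb {E}} X^4 = 3\sigma _N^4\) yield

$$\begin{aligned} {\mathbb {E}} :\!X^4\!: \, = 3\sigma _N^4 - 6 \sigma _N^2 \cdot \sigma _N^2 + 3 \sigma _N^4 = 0, \end{aligned}$$

so the renormalized potential energy has mean zero under \({\mathscr {g}}_2\).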

This leads to the truncated \(\Phi ^4_2\)-model given by

$$\begin{aligned} {\mathrm {d}} \Phi ^4_{2;N}(\phi ) \overset{\text {def}}{=}{\mathcal {Z}}^{-1}_N \exp \Big ( - \frac{1}{4} \int _{{\mathbb {T}}^2} :(P_{\le N} \phi )^4 :(x) \,{\mathrm {d}}x\Big ) {\mathrm {d}}{\mathscr {g}}_2(\phi ). \end{aligned}$$

After this renormalization, one can show (cf. [36]) that the densities \({\mathrm {d}} \Phi ^4_{2;N}/{\mathrm {d}}{\mathscr {g}}_2\) converge in \(L^q({\mathscr {g}}_2)\) for all \(1\le q < \infty \), and we can define \(\Phi ^4_2\) as the limit (in total-variation) of \(\Phi ^4_{2;N}\) as \(N\rightarrow \infty \). As in one spatial dimension, the \(\Phi ^4_2\)-model is absolutely continuous with respect to the Gaussian free field \({\mathscr {g}}_2\). Using tools similar to those for the \(\Phi ^4_2\)-model, Bourgain [9] constructed the Gibbs measure \(\mu \) for the Hamiltonian with a Hartree interaction for \(\beta >2\), which corresponds to a relatively smooth interaction potential V. The key point of this paragraph is that the \(\Phi ^4_1\)-model, the \(\Phi ^4_2\)-model, and the Gibbs measure \(\mu \) for a smooth interaction potential can be constructed through “hard” analysis. As a result, one obtains strong modes of convergence and absolute continuity with respect to the Gaussian free field.

The construction of the \(\Phi ^4_3\)-model, however, is much more complicated. As will be described below, several of the “hard” conclusions, such as convergence in total-variation or absolute continuity with respect to the Gaussian free field, are either unavailable or fail. As a result, we have to (partially) replace hard estimates by softer compactness arguments. We now give a short overview of the variational approach in [5, 6], which forms the basis of this paper.

In order to use techniques from stochastic control theory, we introduce a family of Gaussian processes \((W_t(x))_{t\ge 0}\) on an ambient probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) satisfying \({{\,\mathrm{Law}\,}}_{{\mathbb {P}}}(W_\infty ) = {\mathscr {g}}_3\), which will be defined in Sect. 2.1. We view t as a stochastic time-variable which serves as a regularization parameter. Using this terminology, we obtain a truncated \(\Phi ^4_3\)-model by setting

$$\begin{aligned} {\mathrm {d}} \Phi ^4_{3;T}(\phi ) = (W_\infty )_{\#} \big ( {\mathrm {d}} {\overline{\Phi }}^4_{3;T}(\phi ) \big ) \end{aligned}$$

and

$$\begin{aligned} {\mathrm {d}} {\overline{\Phi }}^4_{3;T}(\phi ) = {\mathcal {Z}}_T^{-1} \exp \big ( - \frac{1}{4} \int _{{\mathbb {T}}^3} W_T^4(x) - a_T W_T^2(x) - b_T \,{\mathrm {d}}x\big ) {\mathrm {d}}{\mathbb {P}}. \end{aligned}$$

We emphasize already that the \(\Phi ^4_{3;T}\)-measure does not correspond to a truncated Hamiltonian, which will be discussed in full detail in Sect. 2.1. In order to construct the \(\Phi ^4_3\)-model, the main step is to prove the tightness of the \(\Phi ^4_{3;T}\)-measures. Using Prokhorov’s theorem, this implies the weak convergence of a subsequence of \(\Phi ^4_{3;T}\) and we can define the \(\Phi ^4_3\)-measure as the weak limit. To prove tightness, Barashkov and Gubinelli obtain uniform bounds in T on the Laplace transform

$$\begin{aligned} f \in C( {\mathcal {C}}_x^{-\frac{1}{2}-}({\mathbb {T}}^3);{\mathbb {R}}) \mapsto \int {\mathrm {d}} \Phi ^4_{3;T}(\phi ) ~ e^{-f(\phi )}. \end{aligned}$$

The main ingredients for the uniform bounds are the Boué-Dupuis formula (Theorem 2.1) and the para-controlled calculus of Gubinelli, Imkeller, and Perkowski [20], which has also been used in the stochastic quantization approach to the \(\Phi ^4_3\)-model (cf. [19]).
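
For orientation, we recall the Boué-Dupuis formula in schematic form; the precise statement, including the assumptions on the functional and the class of admissible drifts, is the content of Theorem 2.1 below. For suitable functionals F of the Gaussian process, it states that

$$\begin{aligned} - \log {\mathbb {E}} \big [ e^{-F(W_\infty )} \big ] = \inf _{u} \, {\mathbb {E}} \Big [ F\big ( W_\infty + I_\infty (u) \big ) + \frac{1}{2} \int _0^\infty \Vert u_s \Vert _{L^2_x}^2 \, {\mathrm {d}}s \Big ], \end{aligned}$$

where the infimum ranges over adapted drifts u and \(I_\infty (u)\) denotes the associated drift term; the notation here is schematic and is fixed precisely in Sect. 2.1.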

While the variational approach yields the existence of the \(\Phi ^4_3\)-measure, it only yields limited information regarding its properties. In spatial dimensions \(d=1,2\), the \(\Phi ^4_d\)-model is absolutely continuous with respect to the Gaussian free field \({\mathscr {g}}_d\), and hence the samples of \(\Phi ^4_d\) behave, for many purposes, like a random Fourier series with independent coefficients. This is an essential ingredient in almost all invariance arguments for random dispersive equations (see e.g. [8, 9, 13, 33]). Unfortunately, the \(\Phi ^4_3\)-measure is singular with respect to the Gaussian free field \({\mathscr {g}}_3\). This fact seems to be part of the folklore in mathematical physics, but it is surprisingly difficult to find a detailed reference. In an unpublished note available to the author [24], Martin Hairer proved the singularity using the stochastic quantization approach and regularity structures. Using the Girsanov transformation, Barashkov and Gubinelli [6] constructed a reference measure \(\nu _3^4\) for the \(\Phi ^4_3\)-model, which serves a similar purpose as the Gaussian free field for \(\Phi ^4_1\) and \(\Phi ^4_2\). The samples of \(\nu ^4_3\) are given by an explicit Gaussian chaos and \(\Phi ^4_3\) is absolutely continuous with respect to \(\nu ^4_3\). Furthermore, Barashkov and Gubinelli proved that the reference measure \(\nu ^4_3\) and the Gaussian free field \({\mathscr {g}}_3\) are mutually singular, which yields a self-contained proof of the singularity of \(\Phi ^4_3\) with respect to the Gaussian free field \({\mathscr {g}}_3\).

2.1 Main results and methods

In the following, we simply write \({\mathscr {g}}={\mathscr {g}}_3\) for the three-dimensional Gaussian free field. Let \(N\ge 1\) be a dyadic integer and define the renormalized potential energy by

$$\begin{aligned} :{\mathcal {V}}_N^\lambda (\phi ):&\overset{\text {def}}{=}&\frac{\lambda }{4} \int _{{\mathbb {T}}^3} \Big ( ( V *\phi ^2) \phi ^2 - 2 a_N \phi ^2 - 4 ({\mathcal {M}}_N\phi ) \phi \nonumber \\&+ {\widehat{V}}(0) a_N^2 + 2 b_N \Big ) \,{\mathrm {d}}x+ c_N^\lambda . \end{aligned}$$
(1.9)

The coupling constant \(\lambda >0\) is introduced for illustrative purposes, but the reader may simply set \(\lambda =1\) as in all previous discussions. The renormalization constants \(a_N,b_N\), and \(c_N^\lambda \) are as in Definition 2.8 and Proposition 3.2 and the renormalization multiplier \({\mathcal {M}}_N\) is as in Definition 2.8. We emphasize that the renormalization in (1.9) goes beyond the usual Wick-ordering, which is only based on the mass \(\Vert P_{\le N} \phi \Vert _{L^2}^2\). The additional renormalization is contained in the renormalization constant \(c_N^\lambda \), which is related to the mutual singularity of \(\mu ^{\otimes }\) and \({\mathscr {g}}\) (for \(0<\beta <1/2\)). The truncated and renormalized Hamiltonian \(H_N\) is given by

$$\begin{aligned} H_N[\phi _0,\phi _1] \overset{\text {def}}{=}\frac{1}{2} \Big ( \Vert \phi _0 \Vert _{L^2}^2 + \Vert \nabla \phi _0 \Vert _{L^2}^2 + \Vert \phi _1 \Vert _{L^2}^2 \Big ) + :{\mathcal {V}}_N^\lambda (P_{\le N} \phi _0):~, \end{aligned}$$
(1.10)

where we omit the dependence on \(\lambda >0\) from our notation. We emphasize that only the quartic term contains a frequency-truncation and renormalization, whereas the quadratic terms remain unchanged. As described in the beginning of the introduction, we focus on the first factor of the truncated Gibbs measure \(\mu ^{\otimes }_N\), which is given by

$$\begin{aligned} \, {\mathrm {d}}\mu _N(\phi ) = \frac{1}{{\mathcal {Z}}_N^\lambda } \exp \Big ( - :{\mathcal {V}}_N^\lambda (P_{\le N} \phi ):\Big ) {\mathrm {d}}{\mathscr {g}}(\phi ). \end{aligned}$$
(1.11)

Before we state our main result, we recall the assumptions on the interaction potential \(V:{\mathbb {T}}^3 \rightarrow {\mathbb {R}}\) from the introduction to the series. In these assumptions, \(0<\beta <3\) is a parameter.

Assumptions A

We assume that the interaction potential V satisfies

  1. (1)

    \(V(x)= c_\beta |x|^{-(3-\beta )}\) for some \(c_\beta >0\) and all \(x\in {\mathbb {T}}^3\) satisfying \(\Vert x\Vert \le 1/10\),

  2. (2)

    \(V(x) \gtrsim _\beta 1\) for all \(x\in {\mathbb {T}}^3\),

  3. (3)

    \(V(x)=V(-x)\) for all \(x\in {\mathbb {T}}^3\),

  4. (4)

    V is smooth away from the origin.

We now state the conclusions of this paper which will be needed in the second part of this series [11]. A more comprehensive version of our results will then be stated in Theorems 1.3, 1.4, and 1.5 below. The additional results may be useful in further applications, such as invariant measures for a Schrödinger equation with a Hartree nonlinearity.

Theorem 1.1

(The Gibbs measure). Let \(\kappa >0\) be a fixed positive parameter, let \(0<\beta <3\) be a parameter, and let the interaction potential V be as in the Assumptions A. Then, the sequence of truncated Gibbs measures \((\mu _N)_{N\ge 1}\) converges weakly to a probability measure \(\mu _\infty \) on \({\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3)\), which is called the Gibbs measure. If in addition \(0<\beta <1/2\), the Gibbs measure \(\mu _\infty \) and the Gaussian free field \({\mathscr {g}}\) are mutually singular. Furthermore, there exists a sequence of reference measures \((\nu _N)_{N\ge 1}\) on \({\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3)\) and an ambient probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) satisfying the following properties:

  1. (1)

    (Absolute continuity and \(L^q\)-bounds) The truncated Gibbs measures \(\mu _N\) are absolutely continuous with respect to the reference measures \(\nu _N\). More quantitatively, there exists a parameter \(q>1\) and a constant \(C\ge 1\), depending only on \(\beta \), such that

    $$\begin{aligned} \mu _N(A) \le C \nu _N(A)^{1-\frac{1}{q}} \end{aligned}$$

    for all Borel sets \(A \subseteq {\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3)\).

  2. (2)

    (Representation of \(\nu _N\)) Let \(\gamma =\min (1/2+\beta ,1)\). There exists a large integer \(k=k(\beta )\) and two random functions \({\mathcal {G}}, {\mathcal {R}}_N:(\Omega ,{\mathcal {F}}) \rightarrow {\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3)\) satisfying for all \(p\ge 2\) that

    $$\begin{aligned} \nu _N = {{\,\mathrm{Law}\,}}_{{\mathbb {P}}}\big ({\mathcal {G}}+ {\mathcal {R}}_N \big ), \quad {\mathscr {g}} = {{\,\mathrm{Law}\,}}_{{\mathbb {P}}} \big ( {\mathcal {G}}\big ) , \quad \text {and} \quad \Vert {\mathcal {R}}_N \Vert _{L^p_\omega {\mathcal {C}}_x^{\gamma -\kappa }(\Omega \times {\mathbb {T}}^3)} \le p^{\frac{k}{2}}. \end{aligned}$$

Remark 1.2

After the completion of (the first version of) this series, the author learned of independent work by Oh, Okamoto, and Tolomeo [35], which discusses the focusing and defocusing three-dimensional (stochastic) nonlinear wave equation with a Hartree nonlinearity. In the focusing case, the authors provide a complete picture of the construction and properties of the focusing Gibbs measures, which distinguishes the three regimes \(\beta >2\), \(\beta =2\), and \(\beta <2\) (cf. [35]). In the defocusing case, the authors construct the Gibbs measures for \(\beta >0\) and prove the singularity for \(0<\beta \le 1/2\), which includes the endpoint \(\beta =1/2\). The reference measures are briefly discussed in [35, Appendix C], but only play a minor role in their analysis. The \(L^q\)-bound in Theorem 1.1, which will be essential in the second part of this series [11], is not proven in [35].

In the first version of this manuscript, we proved the tightness of the truncated Gibbs measures \((\mu _N)_{N\ge 1}\), which only implies the convergence of a subsequence of \((\mu _N)_{N\ge 1}\). In [35], the authors proved the uniqueness of weak subsequential limits, which leads to the convergence of the full sequence. A version of the uniqueness argument from [35], modified to match our notation, has now been included in “Appendix C”.

While the measure-theoretic part of [35] treats all \(\beta >0\), the dynamical results are restricted to \(\beta >1\). In particular, the singular regime \(0<\beta <1/2\) is not covered, which is the main object of this series.

In addition to the singular regime \(0<\beta <1/2\), the most interesting cases in Theorem 1.1 are the Newtonian potential \(|x|^{-2}\) (corresponding to \(\beta =1\)) and the Coulomb potential \(|x|^{-1}\) (corresponding to \(\beta =2\)). As mentioned earlier in the introduction, Bourgain [9] proved a version of Theorem 1.1 in the limited range \(\beta >2\), which corresponds to a relatively smooth interaction potential.

We now split the main theorem (Theorem 1.1) into three parts:

  • the tightness and weak convergence of the truncated Gibbs measures \(\mu _N\),

  • the construction and properties of the reference measures \(\nu _N\),

  • the mutual singularity of the Gibbs measure and the Gaussian free field.

Theorem 1.3

(Tightness and convergence). The truncated Gibbs measures \((\mu _N)_{N\ge 1}\) are tight on \({\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3)\). Furthermore, the sequence \((\mu _N)_{N\ge 1}\) weakly converges to a limiting measure \(\mu _\infty \).

The overall strategy of the proof of Theorem 1.3 is the same as in the variational approach of Barashkov and Gubinelli [5]. In comparison with [5], the terms in this paper often have a more complicated algebraic structure but obey better analytical estimates. As any reader familiar with regularity structures or para-controlled calculus may certify, the algebraic structure of most stochastic objects is already quite complicated, so this trade-off is not always favorable. In addition, the non-locality of the nonlinearity requires different analytical estimates and we mention the two most important examples:

  1. (i)

    The coercive term \(\Vert f\Vert _{L^4}^4\) in the variational problem for the \(\Phi ^4_3\)-model is replaced by the potential energy

    $$\begin{aligned} \int _{{\mathbb {T}}^3} (V *f^2) f^2 \,{\mathrm {d}}x. \end{aligned}$$

    We emphasize that the coercive term in the variational problem does not contain a renormalization, which is a result of the binomial formula in Lemma 2.11. In order to use the potential energy in our estimates, we rely on a fractional derivative estimate of Visan [41, (5.17)].

  2. (ii)

    In the variational problem, we encounter mixed terms of the form

    $$\begin{aligned} \int _{{\mathbb {T}}^3} \Big [ \big ( V *( P_{\le N} W_\infty \cdot P_{\le N} f_1 ) \big ) \cdot P_{\le N} W_\infty \cdot P_{\le N} f_2 - \big ( {\mathcal {M}}_N P_{\le N} f_1 \big ) P_{\le N} f_2 \Big ] \,{\mathrm {d}}x, \end{aligned}$$

    where \((W_t)_{t\ge 0}\) is the Gaussian process from the introduction. Based on the literature on random dispersive equations [8, 9, 13, 14, 22], it is tempting to bound this mixed term through Fourier-analytic and random matrix techniques. We instead develop a simpler and more elegant physical-space approach.

The next theorem gives a more detailed description of the reference measures in Theorem 1.1. To simplify the notation, we allow the truncation parameter N to take the value \(\infty \).

Theorem 1.4

(Reference measures). There exists a family of reference measures \((\nu _N)_{1 \le N \le \infty }\) and an ambient probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) satisfying the following properties:

  1. (1)

    Absolute continuity and \(L^q\)-bounds: The truncated Gibbs measures \(\mu _N\) are absolutely continuous with respect to the reference measures \(\nu _N\). More quantitatively, there exists a parameter \(q>1\) and a constant \(C\ge 1\), depending only on \(\beta \), such that

    $$\begin{aligned} \mu _N(A) \le C \nu _N(A)^{1-\frac{1}{q}} \end{aligned}$$

    for all Borel sets \(A \subseteq {\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3)\).

  2. (2)

    Representation of \(\nu _N\): We have that

    $$\begin{aligned} \nu _N = {{\,\mathrm{Law}\,}}_{{\mathbb {P}}} \big ( {\mathcal {G}}^{(1)} + {\mathcal {G}}^{(3)}_N + {\mathcal {G}}^{(n)}_N \big ). \end{aligned}$$

    Here, \(n=n(\beta )\) is a large integer and the linear, cubic, and n-th order Gaussian chaoses are explicitly given by

    $$\begin{aligned} {\mathcal {G}}^{(1)}&= W_\infty , \\ {\mathcal {G}}^{(3)}_N&= -\lambda P_{\le N} \int _0^\infty J_s^2 \Big ( :( V *( P_{\le N} W_s )^2 ) P_{\le N} W_s :\Big ) \, {\mathrm {d}}s, \\ {\mathcal {G}}^{(n)}_N&= P_{\le N} \int _0^\infty \langle \nabla \rangle ^{-\frac{1}{2}} J_s^2 \Big ( :( \langle \nabla \rangle ^{-\frac{1}{2}} P_{\le N} W_s )^n :\Big ) \, {\mathrm {d}}s, \end{aligned}$$

    where we refer the reader to Sect. 2.1 and Definition 2.6 for the definitions of \(J_s\) and the renormalizations.

We emphasize that the representation of \(\nu _N\) in Theorem 1.4 is much more detailed than stated in Theorem 1.1. This additional information is not required in our proof of global well-posedness and invariance in the second part of the series. However, we believe that the more detailed representation may be relevant for the Schrödinger equation with a Hartree nonlinearity. The reason lies in low\(\times \)low\(\times \)high-interactions, which are more difficult in Schrödinger equations than in wave equations. In the last two years, we have seen new and intricate methods dealing with these interactions [10, 13, 14], but all of these papers heavily rely on the independence of the Fourier coefficients. In fact, overcoming this obstruction is mentioned as an open problem in [14, Section 9.1].

The proof of Theorem 1.4 is based on the Girsanov approach of Barashkov and Gubinelli [6]. As mentioned earlier, however, we cannot use the same approximate Gibbs measures as in [6], since they do not correspond to a frequency-truncated Hamiltonian. In the second part of the series, the frequency-truncated Hamiltonians are an essential ingredient in the proof of global well-posedness and invariance. This difference will be discussed in detail in Sect. 2.1. For now, we simply mention that there is a trade-off between desirable properties from a PDE or probabilistic perspective.

Our last theorem describes the relationship between the Gibbs measure \(\mu _\infty \) and the Gaussian free field \({\mathscr {g}}\).

Theorem 1.5

(Singularity). If \( 0< \beta < 1/2 \), then the Gibbs measure \( \mu _\infty \) and the Gaussian free field \( {\mathscr {g}} \) are mutually singular. If \( \beta > 1/2 \), then the Gibbs measure is absolutely continuous with respect to the Gaussian free field \( {\mathscr {g}} \).

Theorem 1.5 determines the exact threshold between absolute continuity and singularity of \(\mu _\infty \) with respect to \({\mathscr {g}}\). As mentioned in Remark 1.2, the singularity at the endpoint \(\beta =1/2\) has been obtained in independent work by Oh, Okamoto, and Tolomeo [35]. The absolute continuity for \(\beta >1/2\) already follows from the variational estimates in our construction of \(\mu _\infty \). The main step is the mutual singularity of \(\mu _\infty \) and \({\mathscr {g}}\) for \(0<\beta <1/2\). We provide an explicit event witnessing this singularity, which is based on the behaviour of the frequency-truncated potential energy

$$\begin{aligned} \int _{{\mathbb {T}}^3} :(V *(P_{\le N} \phi )^2 ) (P_{\le N} \phi )^2 :\,{\mathrm {d}}x\end{aligned}$$

under the different measures.

1.2 Overview

To orient the reader, let us review the rest of this paper. In Sect. 2.1, we introduce the stochastic control perspective and recall the Boué–Dupuis formula. In Sect. 2.2, we estimate several stochastic objects, such as the renormalized nonlinearity \(:(V*W_t^2) W_t:\). Our main tools will be Itô’s formula and Gaussian hypercontractivity. In Sect. 3, we prove the tightness of the truncated Gibbs measures \(\mu _N\) and construct the limiting measure \(\mu _\infty \). Using the Laplace transform and the Boué–Dupuis formula, the proof of tightness reduces to estimates for a variational problem, which occupy most of this section. In Sect. 4, we first construct the reference measures \(\nu _N\) and then examine their properties. The main ingredients are Girsanov’s transformation and our earlier variational estimates. Finally, in Sect. 5, we prove the singularity of the Gibbs measure \( \mu _\infty \) with respect to the Gaussian free field \( {\mathscr {g}} \) for all \( 0<\beta < 1/2 \).

1.3 Notation

In the rest of the paper, we use \( \overset{\text {def}}{=}\) instead of \( := \) for definitions. The reason is that the colon in \( := \) may be confused with our notation for renormalized powers in Definition 2.6 below. With a slight abuse of notation, we write \( \,{\mathrm {d}}x\) for the normalized Lebesgue measure on \( {\mathbb {T}}^3 \). That is, we implicitly normalize

$$\begin{aligned} \int _{{\mathbb {T}}^3} 1 \,{\mathrm {d}}x= 1. \end{aligned}$$

We define the Fourier transform of a function \( f :{\mathbb {T}}^3 \rightarrow {\mathbb {C}} \) by

$$\begin{aligned} {\widehat{f}}(n) \overset{\text {def}}{=}\int _{{\mathbb {T}}^3} f(x) e^{-i\langle n,x\rangle } \,{\mathrm {d}}x. \end{aligned}$$

For any \( k \in {\mathbb {N}} \) and \( n_1,\ldots ,n_k \in {\mathbb {Z}}^3 \), we define

$$\begin{aligned} n_{12\ldots k} \overset{\text {def}}{=}\sum _{j=1}^k n_j. \end{aligned}$$
(1.12)

For instance, \( n_{12}= n_1 +n_2 \) and \( n_{123} = n_1 + n_2 + n_3 \).

We now introduce our frequency-truncation operators. We let \( \rho :{\mathbb {R}}_{>0} \rightarrow [0,1] \) be a smooth, non-increasing function satisfying \( \rho (y) = 1 \) for all \( 0\le y \le 1/4 \) and \( \rho (y) = 0 \) for all \( y \ge 4 \). We also assume that \( \min ( \rho (y), - \rho ^\prime (y)) \gtrsim 1 \) for all \( 1/2 \le y \le 2 \). For any \( t \ge 0 \) and \( n \in {\mathbb {Z}}^3 \), we also define

$$\begin{aligned} \rho _t(n) \overset{\text {def}}{=}\rho \Big ( \frac{\Vert n\Vert _2}{\langle t \rangle }\Big ). \end{aligned}$$

In particular, it holds that \( t \mapsto \rho _t(n) \) is non-decreasing. In order to break up the frequency truncation, we also set

$$\begin{aligned} \sigma _t(n) \overset{\text {def}}{=}\Big ( \frac{{\mathrm {d}}}{{\mathrm {d}}t}\rho _t(n)^2 \Big )^{\frac{1}{2}}. \end{aligned}$$
(1.13)

This continuous approach, as opposed to the usual discrete decomposition, will be essential in the stochastic control approach (Sect. 2.1). Nevertheless, we will sometimes use the usual dyadic Littlewood–Paley operators. For any dyadic \( N \ge 1 \), we define \( P_{\le N} \) by

$$\begin{aligned} \widehat{P_{\le N}f}(n) = \rho _N(n) {\widehat{f}}(n). \end{aligned}$$
(1.14)

We further set

$$\begin{aligned} P_{1} f= P_{\le 1} f \qquad \text {and} \qquad P_N f = P_{\le N} f - P_{\le N/2} f \quad \text {for all } N \ge 2. \end{aligned}$$

The corresponding Fourier multipliers are denoted by

$$\begin{aligned} \chi _1(n)= \rho _1(n) \qquad \text {and} \qquad \chi _N(n) = \rho _{N}(n) - \rho _{N/2}(n) \quad \text {for all } N \ge 2. \end{aligned}$$
(1.15)
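As a quick sanity check on (1.14) and (1.15), the dyadic multipliers telescope back to the full cutoff and sum to 1 pointwise. The following sketch (our own illustration; the ramp below is a hypothetical monotone stand-in for the smooth cutoff \( \rho \)) verifies this numerically:

```python
import math

def rho(y):
    """Monotone stand-in for the smooth cutoff (hypothetical, not the paper's rho)."""
    if y <= 0.25:
        return 1.0
    if y >= 4.0:
        return 0.0
    s = math.log(y / 0.25) / math.log(16.0)   # s in (0, 1) on the ramp
    return 0.5 * (1.0 + math.cos(math.pi * s))

def rho_N(N, n_norm):
    # rho_t(n) = rho(||n|| / <t>) with <t> = sqrt(1 + t^2), evaluated at t = N
    return rho(n_norm / math.sqrt(1.0 + N * N))

def chi(N, n_norm):
    # the multipliers chi_N from (1.15)
    return rho_N(1, n_norm) if N == 1 else rho_N(N, n_norm) - rho_N(N // 2, n_norm)

# The dyadic pieces telescope back to the cutoff and sum to 1 pointwise.
for n_norm in (0.0, 3.0, 17.5, 120.0):
    total = chi(1, n_norm) + sum(chi(2 ** k, n_norm) for k in range(1, 12))
    assert abs(total - rho_N(2 ** 11, n_norm)) < 1e-12
    assert abs(total - 1.0) < 1e-12
```

In particular, \( \chi _1 + \sum _{N \ge 2} \chi _N(n) = \lim _{M\rightarrow \infty } \rho _M(n) = 1 \) for every fixed frequency, so the \( (P_N)_N \) form a Littlewood–Paley partition of unity.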

For any \( s \in {\mathbb {R}}\), the \( {\mathcal {C}}_x^s({\mathbb {T}}^3) \)-norm is defined as

$$\begin{aligned} \Vert f \Vert _{{\mathcal {C}}_x^s({\mathbb {T}}^3)} \overset{\text {def}}{=}\sup _{N\ge 1} N^s \Vert P_N f \Vert _{L^\infty _x({\mathbb {T}}^3)}. \end{aligned}$$
(1.16)

We then define the corresponding space \( {\mathcal {C}}_x^s({\mathbb {T}}^3) \) by

$$\begin{aligned} {\mathcal {C}}_x^s({\mathbb {T}}^3) \overset{\text {def}}{=}\big \{ f :{\mathbb {T}}^3 \rightarrow {\mathbb {R}}|~ \Vert f \Vert _{{\mathcal {C}}_x^s} < \infty , \lim _{N\rightarrow \infty } N^s \Vert P_N f \Vert _{L^\infty _x({\mathbb {T}}^3)} = 0 \big \}. \end{aligned}$$
(1.17)

Due to the additional constraint as \( N \rightarrow \infty \), the space \( {\mathcal {C}}_x^s({\mathbb {T}}^3) \) is separable. This allows us to later use Prokhorov’s theorem for families of measures on \( {\mathcal {C}}_x^s({\mathbb {T}}^3) \). We also define

$$\begin{aligned} \begin{aligned}&{\mathcal {C}}_t^0 {\mathcal {C}}_x^s([0,\infty ]\times {\mathbb {T}}^3) \\&\overset{\text {def}}{=}\big \{ f :[0,\infty ) \times {\mathbb {T}}^3 \rightarrow {\mathbb {R}}|~ \sup _{t\ge 0} \Vert f(t,\cdot ) \Vert _{{\mathcal {C}}^s_x({\mathbb {T}}^3)} < \infty , \lim _{t\rightarrow \infty } f(t,\cdot ) \text { exists in } {\mathcal {C}}_x^s({\mathbb {T}}^3) \big \}. \end{aligned} \end{aligned}$$
(1.18)

As above, the additional restriction as \( t\rightarrow \infty \) makes \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^s([0,\infty ]\times {\mathbb {T}}^3) \) separable.

As a measure of tightness in \({\mathcal {C}}_t^0 {\mathcal {C}}_x^s([0,\infty ]\times {\mathbb {T}}^3) \), we define for any \( 0< \alpha < 1 \) and \( \eta >0 \) the norm

$$\begin{aligned}&\Vert f \Vert _{{\mathcal {C}}_t^{\alpha ,\eta }{\mathcal {C}}_x^s([0,\infty ]\times {\mathbb {T}}^3)} \overset{\text {def}}{=}\Vert f(0)\Vert _{{\mathcal {C}}_x^s({\mathbb {T}}^3)} \nonumber \\&\qquad + \sup _{0\le t,t^\prime \le \infty } \bigg ( \min ( \langle t\rangle , \langle t^\prime \rangle )^\eta \frac{\Vert f(t) - f(t^\prime )\Vert _{{\mathcal {C}}_x^s({\mathbb {T}}^3)}}{1 \wedge |t-t^\prime |^\alpha } \bigg ). \end{aligned}$$
(2.1)

For \( 1 \le r \le \infty \), we also define the Sobolev space \( {\mathbb {W}}^{s,r}_x({\mathbb {T}}^3) \) as the completion of \( C^\infty _x({\mathbb {T}}^3) \) with respect to

$$\begin{aligned} \Vert f \Vert _{{\mathbb {W}}_x^{s,r}} = \Vert N^s P_N f \Vert _{\ell _N^r L_x^r}. \end{aligned}$$

We hope that the subscript x prevents any confusion with the stochastic objects in Sect. 2.2.

2 Stochastic objects

In this section, we introduce the stochastic control framework and describe several stochastic objects. While readers with a background in singular SPDEs and advanced stochastic calculus may regard much of this material as standard, it may be new to readers with a primary background in dispersive PDEs. As a result, we include full details for most standard arguments but encourage the expert to skip the proofs.

2.1 Stochastic control perspective

We let \( (B_t^n)_{n\in {\mathbb {Z}}^3\backslash \{0\}} \) be a sequence of standard complex Brownian motions such that \( B_t^{-n} = \overline{B_t^n} \) and \( B_t^n, B_t^m \) are independent for \( n \ne \pm m \). We let \( B_t^0 \) be a standard real-valued Brownian motion independent of \( (B_t^n)_{n\in {\mathbb {Z}}^3\backslash \{0\}} \). Furthermore, we let \( B_t(\cdot ) \) be the Gaussian process with Fourier coefficients \( (B_t^n)_{n \in {\mathbb {Z}}^3} \), i.e.,

$$\begin{aligned} B_t(x) \overset{\text {def}}{=}\sum _{n\in {\mathbb {Z}}^3} e^{i\langle n,x \rangle } B_t^n. \end{aligned}$$
(2.2)

For every \( t\ge 0 \), the Gaussian process formally satisfies \( {\mathbb {E}}[ B_t(x)B_t(y) ] = t \cdot \delta (x-y) \) and hence \( B_t(\cdot )\) is a scalar multiple of spatial white noise. We also let \( ({\mathcal {F}}_t)_{t\ge 0} \) be the filtration generated by the family of Brownian motions \( (B^n)_{n\in {\mathbb {Z}}^3} \). For future use, we denote the ambient probability space by \( (\Omega ,{\mathcal {F}},{\mathbb {P}}) \).

The Gaussian free field \( {\mathscr {g}} \), however, has covariance \( (1-\Delta )^{-1} \). To account for this, we now introduce the Gaussian process \( W_t(x) \). For \( \sigma _t(n) \) as in (1.13) and any \( n \in {\mathbb {Z}}^3 \), we define

$$\begin{aligned} W_t^{n} \overset{\text {def}}{=}\int _0^t \frac{\sigma _s(n)}{\langle n\rangle } \, {\mathrm {d}}B_s^n ~. \end{aligned}$$
(2.3)

We note that \( W_t^n \) is a complex Gaussian random variable with variance \( \rho _t^2(n)/\langle n \rangle ^2\). We finally set

$$\begin{aligned} W_t(x) \overset{\text {def}}{=}\sum _{n\in {\mathbb {Z}}^3} e^{i \langle n , x \rangle } W_t^n. \end{aligned}$$
(2.4)

It is easy to see for any \( \kappa >0 \) that \( W \in {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \) almost surely. With a slight abuse of notation, we write \( {\mathrm {d}}{\mathbb {P}}(W) \) for the integration with respect to the law of W under \( {\mathbb {P}}\), i.e., we omit the pushforward by W, and we write W for the canonical process on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \). Comparing \( W_t \) and \( B_t \), we have changed the covariance from \( t{\text {Id}}\) to \( \rho _t(\nabla )^2 (I-\Delta )^{-1} \). For any fixed \( T \ge 0 \), we have that

$$\begin{aligned} {{\,\mathrm{Law}\,}}_{{\mathbb {P}}}(W_T) = {{\,\mathrm{Law}\,}}_{{\mathbb {P}}}(\rho _T(\nabla )W_\infty ). \end{aligned}$$
(2.5)

We already emphasize, however, that the processes \( t \mapsto W_t \) and \( t \mapsto \rho _t(\nabla ) W_\infty \) have different laws, since only the first process has independent increments. This difference will be important in the definition of \( {\widetilde{\mu }}_{T} \) below. To simplify the notation, we also introduce the Fourier multiplier \( J_t \), which is defined by

$$\begin{aligned} \widehat{J_t f}(n) \overset{\text {def}}{=}\frac{\sigma _t(n)}{\langle n\rangle } {\widehat{f}}(n). \end{aligned}$$
(2.6)

Using this notation, we can represent the Gaussian process \( W_t \) through the stochastic integral

$$\begin{aligned} W_t = \int _0^t J_s \, {\mathrm {d}}B_s. \end{aligned}$$

In a similar spirit, we define for any \( u:[0,\infty )\times {\mathbb {T}}^3 \rightarrow {\mathbb {R}}\) the integral \( I_t[u] \) by

$$\begin{aligned} I_t[u] \overset{\text {def}}{=}\int _0^t J_s u_s \, {\mathrm {d}}s. \end{aligned}$$
(2.7)

We now recall the Boué-Dupuis formula [3], where our formulation closely follows [5, 6]. We let \( {\mathbb {H}}_a\) be the space of \( {\mathcal {F}}_t\)-progressively measurable processes \( u :\Omega \times [0,\infty ) \times {\mathbb {T}}^3 \rightarrow {\mathbb {R}}\) which \( {\mathbb {P}}\)-almost surely belong to \( L_{t,x}^2([0,\infty )\times {\mathbb {T}}^3) \).

Theorem 2.1

(Boué-Dupuis formula). Let \( 0< T <\infty \), let \( F: C_t([0,T],C_x^\infty ({\mathbb {T}}^3)) \rightarrow {\mathbb {R}}\) be a Borel measurable function, and let \( 1<p,q<\infty \). Assume that

$$\begin{aligned} \frac{1}{p}+\frac{1}{q}=1, \quad {\mathbb {E}}_{{\mathbb {P}}} \big [ |F(W)|^p \big ]< \infty , \quad \text {and} \quad {\mathbb {E}}_{{\mathbb {P}}} \big [ e^{-q F(W)}\big ] < \infty , \end{aligned}$$
(2.8)

where we regard W as an element of \( C_t([0,T],C_x^\infty ({\mathbb {T}}^3)) \). Then,

$$\begin{aligned} - \log {\mathbb {E}}_{{\mathbb {P}}} \Big [ e^{-F(W)}\Big ] = \inf _{u\in {\mathbb {H}}_a} {\mathbb {E}}_{{\mathbb {P}}} \Big [ F(W+I(u)) + \frac{1}{2} \int _0^T \Vert u_s\Vert _{L^2({\mathbb {T}}^3)}^2 \, {\mathrm {d}}s\Big ]. \end{aligned}$$
(2.9)

Remark 2.2

The optimization problem in (2.9) and, more generally, the change of perspective from \( W_\infty \) to the whole process \( t\mapsto W_t \), is reminiscent of stochastic control theory.

Due to the frequency projection in the definition of \( J_t \), we have that \( W_t, I_t[u] \in C_t([0,T],C^\infty _x({\mathbb {T}}^3)) \). In our arguments below, the smoothness can be used to verify (2.8) through soft methods. Of course, a soft method cannot yield uniform bounds in T, which are one of the main goals of this section.
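For orientation, the following finite-dimensional caricature of the Boué–Dupuis formula may be helpful (our own sketch, not part of the paper's argument): for the linear functional \( F(b) = \lambda b_T \) of a one-dimensional Brownian motion, the left-hand side equals \( -\lambda ^2 T/2 \), and the infimum on the right-hand side is already attained among constant deterministic drifts \( u \equiv c \) at \( c = -\lambda \).

```python
import math

# Finite-dimensional caricature of the Boué-Dupuis identity, for the
# hypothetical linear functional F(b) = lam * b_T of a 1d Brownian motion.
lam, T = 0.7, 2.0

# Left-hand side: -log E[exp(-lam * b_T)] with b_T ~ N(0, T).
lhs = -(lam ** 2) * T / 2.0

# Right-hand side, restricted to constant deterministic drifts u = c:
# E[lam * (b_T + c * T)] + (1/2) * c^2 * T = lam * c * T + c^2 * T / 2.
def rhs(c):
    return lam * c * T + 0.5 * c ** 2 * T

# A grid search locates the optimal drift c = -lam and attains the infimum.
best = min(rhs(-2.0 + 4.0 * k / 100000) for k in range(100001))
assert abs(best - lhs) < 1e-6
```

Here the deterministic integral \( \int _0^\cdot c \, {\mathrm {d}}s \) plays the role of the shift \( I(u) \), and the quadratic term is exactly the cost \( \frac{1}{2}\int _0^T \Vert u_s \Vert ^2 \, {\mathrm {d}}s \).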

In the introduction, we discussed the Gibbs measure \( \mu _N \) corresponding to the truncated dynamics induced by \(H_N\), which has been defined in (1.10). In the spirit of the stochastic control approach, we now change our notation and use the parameter T to denote the truncation. Since the law of \( W_\infty \) under \( {\mathbb {P}}\) is the same as the Gaussian free field \( {\mathscr {g}} \) and \(P_{\le T} = \rho _T(\nabla )\), we obtain that

$$\begin{aligned} \, {\mathrm {d}}\mu _T(\phi ) = \frac{1}{{\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }} \exp \Big ( - :{\mathcal {V}}^{\scriptscriptstyle {T},\lambda }(\rho _T(\nabla )\phi ):\Big ) \, {\mathrm {d}}\big ( (W_\infty )_\# {\mathbb {P}}\big )(\phi ). \end{aligned}$$
(2.10)

The renormalized potential energy \( {\mathcal {V}}^{\scriptscriptstyle {T},\lambda }\) is as in (3.2). We view \( \mu _T\) as a measure on the space \( {\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3) \) for any fixed \( \kappa >0 \). In order to utilize the Boué-Dupuis formula, we lift \( \mu _T\) to a measure on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ] \times {\mathbb {T}}^3) \).

Definition 2.3

We define the measure \( {\widetilde{\mu }}_{T}\) on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \) by

$$\begin{aligned} {\mathrm {d}}{\widetilde{\mu }}_T(W) \overset{\text {def}}{=}\frac{1}{{\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }} \exp \big ( - :{\mathcal {V}}^{\scriptscriptstyle {T},\lambda }( \rho _T(\nabla ) W_\infty ) :\big ) \, {\mathrm {d}}{\mathbb {P}}(W). \end{aligned}$$
(2.11)

The next lemma explains the relationship between \( {\widetilde{\mu }}_T \) and \( \mu _T\).

Lemma 2.4

The Gibbs measure \( \mu _T\) is the pushforward of \( {\widetilde{\mu }}_T \) under \( W_\infty \), i.e.,

$$\begin{aligned} \mu _T= (W_\infty )_\# {\widetilde{\mu }}_T . \end{aligned}$$
(2.12)

Due to its central importance to the rest of the paper, we prove this basic identity.

Proof

For any measurable function \( f :{\mathcal {C}}_x^{-\frac{1}{2}-\kappa }({\mathbb {T}}^3) \rightarrow {\mathbb {R}}\), we have that

$$\begin{aligned} \int f(\phi ) {\mathrm {d}}\mu _T(\phi )&= \frac{1}{{\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }} \int f(\phi ) \exp ( - :{\mathcal {V}}^{\scriptscriptstyle {T},\lambda }(\rho _T(\nabla ) \phi ) :) {\mathrm {d}}\big ( (W_\infty )_\# {\mathbb {P}}\big ) (\phi ) \\&= \frac{1}{{\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }} \int f(W_\infty ) \exp ( - :{\mathcal {V}}^{\scriptscriptstyle {T},\lambda }(\rho _T(\nabla ) W_\infty ) :) {\mathrm {d}}{\mathbb {P}}(W) \\&= \int f(W_\infty ) {\mathrm {d}}{\widetilde{\mu }}_T(W) \\&= \int f(\phi ) {\mathrm {d}} \big ( (W_\infty )_\# {\widetilde{\mu }}_T\big )(\phi ). \end{aligned}$$

This proves the desired identity (2.12). \(\square \)
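The computation in the proof is simply a change of variables for pushforward measures; the same mechanism can be seen in a toy discrete setting (all data below are hypothetical stand-ins for \( {\mathbb {P}}\), \( W_\infty \), and the Gibbsian density):

```python
import math

# Discrete stand-ins (all hypothetical) for the base measure P, the
# endpoint map W_infty, and the density exp(-V).
P = {"w1": 0.2, "w2": 0.5, "w3": 0.3}              # base measure on "paths"
endpoint = {"w1": -1.0, "w2": 0.0, "w3": 2.0}      # the map W_infty
V = lambda phi: phi ** 2                            # stand-in potential

Z = sum(math.exp(-V(endpoint[w])) * p for w, p in P.items())
mu_tilde = {w: math.exp(-V(endpoint[w])) * p / Z for w, p in P.items()}

# Pushforward (W_infty)_# mu_tilde as a measure on endpoint values.
mu = {}
for w, m in mu_tilde.items():
    mu[endpoint[w]] = mu.get(endpoint[w], 0.0) + m

# The defining property of the pushforward, as in the computation above:
f = lambda phi: math.cos(phi)                       # arbitrary test function
lhs = sum(f(phi) * m for phi, m in mu.items())
rhs = sum(f(endpoint[w]) * m for w, m in mu_tilde.items())
assert abs(lhs - rhs) < 1e-12
```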

In [5, 6], Barashkov and Gubinelli work with the lifted measure

$$\begin{aligned} {\mathrm {d}}\bar{\mu }_T(W) = \frac{1}{{\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }} \exp \big ( - :{\mathcal {V}}^{\scriptscriptstyle {T},\lambda }( W_T) :\big ) \, {\mathrm {d}}{\mathbb {P}}(W). \end{aligned}$$
(2.13)

While \( W_T \) and \( \rho _T(\nabla ) W_\infty \) have the same distribution, the measures \( {\widetilde{\mu }}_T \) and \( \bar{\mu }_T \) do not coincide. Since this is an important difference between this paper and the earlier works [5, 6], let us explain our motivation for working with \( {\widetilde{\mu }}_T \) instead of \( \bar{\mu }_T \). From a probabilistic standpoint, the measure \( \bar{\mu }_T \) has better properties than \( {\widetilde{\mu }}_T \). This is related to the independent increments of the process \( t \mapsto W_t \), and we provide further comments in Remark 4.8 below. From a PDE perspective, however, \( \bar{\mu }_T \) behaves much worse than \( {\widetilde{\mu }}_T \). For the proof of global well-posedness and invariance in the second part of this series, it is essential that \( \mu _T= (W_\infty )_\# {\widetilde{\mu }}_T \) is invariant under the Hamiltonian flow of (1.10). In contrast, the author is not aware of an explicit expression for the pushforward of \( \bar{\mu }_T \) under \( W_\infty \). In particular, \( (W_\infty )_\# \bar{\mu }_T \) is not directly related to \( \mu _T\) and not necessarily invariant under the Hamiltonian flow of \(H_N\). Alternatively, we could work with the pushforward of \( \bar{\mu }_T \) under \( W_T \). A similar calculation as in the proof of Lemma 2.4 shows that \( (W_T)_\# \bar{\mu }_T = (\rho _T(\nabla ))_\# \mu _T\). Unfortunately, \( (\rho _T(\nabla ))_\# \mu _T\) also does not seem to be invariant under a truncation of the nonlinear wave equation. To summarize, while the measure \( \bar{\mu }_T \) has useful probabilistic properties, it lacks a direct relationship to the truncated dynamics and is ill-suited for our globalization and invariance arguments.

Since we rely on \( \rho _T(\nabla ) W_\infty \) in the definition of \( {\widetilde{\mu }}_T \), the Gaussian process \( t \mapsto \rho _T(\nabla ) W_t\) will play an important role in the rest of this paper. As a result, we now deal with both values T and t simultaneously. In most arguments, T will remain fixed while we use Itô’s formula and martingale properties in t. To simplify the notation, we now write

$$\begin{aligned} W^{\scriptscriptstyle {T}}_t \overset{\text {def}}{=}\rho _T(\nabla ) W_t \qquad \text {and} \qquad W_t^{{\scriptscriptstyle {T}},n} \overset{\text {def}}{=}\rho _T(n) W_t^n. \end{aligned}$$
(2.14)

Since this will be convenient below, we also define

$$\begin{aligned} \rho ^T_t(n) \overset{\text {def}}{=}\rho _T(n) \cdot \rho _t(n), \qquad \sigma _t^T(n) \overset{\text {def}}{=}\rho _T(n) \sigma _t(n),\qquad \text {and} \qquad J_t^{\scriptscriptstyle {T}}\overset{\text {def}}{=}\rho _T(\nabla ) J_t. \end{aligned}$$
(2.15)

Furthermore, we define the integral operator \( I_t^{\scriptscriptstyle {T}}\) by

$$\begin{aligned} I_t^{\scriptscriptstyle {T}}[u] = \rho _T(\nabla ) I_t[u] = \int _0^t J_s^{\scriptscriptstyle {T}}u_s \, {\mathrm {d}}s. \end{aligned}$$
(2.16)

2.2 Stochastic objects and renormalization

We now proceed with the construction and renormalization of several stochastic objects. Similar constructions are standard in the probability theory literature, and comprehensive and well-written introductions can be found in [23, 30, 36]. In order to make this section accessible to readers with a primary background in dispersive PDEs, however, we include full details. In a similar spirit, we follow a hands-on approach and mainly rely on Itô calculus. In Lemma 2.20, however, this approach becomes computationally infeasible and we also use multiple stochastic integrals (see [34] or Sect. A.2).

Lemma 2.5

Let \( S_N \) be the symmetric group on \( \{ 1,\ldots ,N\} \) and let \( W_t^{{\scriptscriptstyle {T}},n} \) be as in (2.14). Then, we have for all \( n_1,n_2,n_3,n_4 \in {\mathbb {Z}}^3 \) that

$$\begin{aligned} W_{t}^{{\scriptscriptstyle {T}},n_1}&= \int _0^t \, {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1} \end{aligned}$$
(2.17)
$$\begin{aligned} W_{t}^{{\scriptscriptstyle {T}},n_1} W_{t}^{{\scriptscriptstyle {T}},n_2}&= \sum _{\pi \in S_2} \int _0^t \int _0^{t_1} \, {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}} + \delta _{n_1+n_2=0} \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_1)^2}{\langle n_1\rangle ^2}, \end{aligned}$$
(2.18)
$$\begin{aligned} W_{t}^{{\scriptscriptstyle {T}},n_1} W_{t}^{{\scriptscriptstyle {T}},n_2} W_{t}^{{\scriptscriptstyle {T}},n_3}&= \sum _{\pi \in S_3} \int _0^t \int _0^{t_1} \int _0^{t_2} \, {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \nonumber \\&\quad + \frac{1}{2} \sum _{\pi \in S_3} \delta _{n_{\pi (1)}+n_{\pi (2)}=0} \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (1)})^2}{\langle n_{\pi (1)}\rangle ^2} W_{t}^{{\scriptscriptstyle {T}},n_{\pi (3)}}, \end{aligned}$$
(2.19)
$$\begin{aligned} W_{t}^{{\scriptscriptstyle {T}},n_1} W_{t}^{{\scriptscriptstyle {T}},n_2} W_{t}^{{\scriptscriptstyle {T}},n_3} W_{t}^{{\scriptscriptstyle {T}},n_4}&= \sum _{\pi \in S_4} \int _0^t \int _0^{t_1} \int _0^{t_2} \int _0^{t_3} \, {\mathrm {d}}W_{t_4}^{{\scriptscriptstyle {T}},n_{\pi (4)}} {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \nonumber \\&\quad + \frac{1}{4} \sum _{\pi \in S_4} \delta _{n_{\pi (1)}+n_{\pi (2)}=0} \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (1)})^2}{\langle n_{\pi (1)} \rangle ^2} W_{t}^{{\scriptscriptstyle {T}},n_{\pi (3)}} W_{t}^{{\scriptscriptstyle {T}},n_{\pi (4)}} \nonumber \\&\quad -\frac{1}{8} \sum _{\pi \in S_4} \delta _{n_{\pi (1)}+n_{\pi (2)}=n_{\pi (3)}+n_{\pi (4)}=0} \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (1)})^2}{\langle n_{\pi (1)} \rangle ^2}\frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (3)})^2}{\langle n_{\pi (3)} \rangle ^2}. \end{aligned}$$
(2.20)

The integrals in (2.17)–(2.20) are iterated Itô integrals. This lemma is related to the product formula for multiple stochastic integrals, see e.g. [34, Proposition 1.1.3].

Proof

The first equation (2.17) follows from the definition of the Itô differential \( {\mathrm {d}}W_t^{n} \).

The second equation (2.18) follows from Itô’s product formula. Indeed, we have that

$$\begin{aligned} W_{t}^{{\scriptscriptstyle {T}},n_1} W_{t}^{{\scriptscriptstyle {T}},n_2}&=\int _0^t W_{s}^{{\scriptscriptstyle {T}},n_2} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_1}+ \int _0^t W_{s}^{{\scriptscriptstyle {T}},n_1} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_2} + \int _0^t {\mathrm {d}}\langle W_{}^{{\scriptscriptstyle {T}},n_1}, W_{}^{{\scriptscriptstyle {T}},n_2} \rangle _s \\&= \int _0^t \Big ( \int _0^s {\mathrm {d}}W_{\tau }^{{\scriptscriptstyle {T}},n_2} \Big ) {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_1} \\&\quad + \int _0^t \Big ( \int _0^s {\mathrm {d}}W_{\tau }^{{\scriptscriptstyle {T}},n_1} \Big ) {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_2} \\&\quad + \delta _{n_1+n_2 =0} \int _0^t \frac{\sigma ^{\scriptscriptstyle {T}}_{s}(n_1)^2}{\langle n_1 \rangle ^2} \, {\mathrm {d}}s\\&= \sum _{\pi \in S_2} \int _0^t \int _0^{t_1} \, {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}} + \delta _{n_1+n_2=0} \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_1)^2}{\langle n_1\rangle ^2}. \end{aligned}$$

The third equation (2.19) follows from Itô’s formula and the second equation (2.18). Using Itô’s formula, we have that

$$\begin{aligned}&W_{t}^{{\scriptscriptstyle {T}},n_1} W_{t}^{{\scriptscriptstyle {T}},n_2} W_{t}^{{\scriptscriptstyle {T}},n_3} \\&\quad = \frac{1}{2} \sum _{\pi \in S_3} \int _0^t W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \\&\qquad + \frac{1}{2} \sum _{\pi \in S_3} \int _0^t W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}\langle W_{}^{{\scriptscriptstyle {T}},n_{\pi (2)}}, W_{}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \rangle _s . \end{aligned}$$

The easiest way to keep track of the pre-factors throughout the proof is to compare the number of terms of each type and the cardinality of the symmetric group. In the formula above, we have three terms of each type and the cardinality \( \# S_3 = 6 \), so we need the pre-factor 1/2. By inserting the second equation (2.17) and our expression for the cross-variation, we obtain

$$\begin{aligned}&W_{t}^{{\scriptscriptstyle {T}},n_1} W_{t}^{{\scriptscriptstyle {T}},n_2} W_{t}^{{\scriptscriptstyle {T}},n_3} \\&\quad = \sum _{\pi \in S_3} \int _0^t \int _0^{t_1} \int _0^{t_2} \, {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}}\\&\qquad + \frac{1}{2} \sum _{\pi \in S_3} \delta _{n_{\pi (3)}+n_{\pi (2)}=0} \int _0^t \frac{\rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (2)})^2}{\langle n_{\pi (2)} \rangle ^2} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \\&\qquad + \frac{1}{2} \sum _{\pi \in S_3} \delta _{n_{\pi (1)}+n_{\pi (2)}=0} \int _0^t \frac{ \sigma ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2}{\langle n_{\pi (1)}\rangle ^2} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} \, {\mathrm {d}}s\\&\quad = \sum _{\pi \in S_3} \int _0^t \int _0^{t_1} \int _0^{t_2} \, {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \\&\qquad + \frac{1}{2} \sum _{\pi \in S_3} \delta _{n_{\pi (1)}+n_{\pi (2)}=0} \int _0^t \bigg ( \frac{ \sigma ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2}{\langle n_{\pi (1)}\rangle ^2} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} \, {\mathrm {d}}s+ \frac{ \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2}{\langle n_{\pi (1)}\rangle ^2} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}}\bigg )\\&\quad = \sum _{\pi \in S_3} \int _0^t \int _0^{t_1} \int _0^{t_2} \, {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \\&\qquad + \frac{1}{2} \sum _{\pi \in S_3} \delta _{n_{\pi (1)}+n_{\pi (2)}=0} \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (1)})^2}{\langle n_{\pi (1)}\rangle ^2} W_{t}^{{\scriptscriptstyle {T}},n_{\pi (3)}}. \end{aligned}$$

For the second equality, we also used the permutation invariance of any sum over \( \pi \in S_3 \). This completes the proof of the third equation (2.19).

We now prove the fourth and final equation (2.20). The argument differs from the proof of the third equation only in its notational complexity. Using Itô’s formula and the third equation (2.19), we obtain that

$$\begin{aligned}&W_{t}^{{\scriptscriptstyle {T}},n_1} W_{t}^{{\scriptscriptstyle {T}},n_2} W_{t}^{{\scriptscriptstyle {T}},n_3} W_{t}^{{\scriptscriptstyle {T}},n_4} \\&\quad = \frac{1}{6} \sum _{\pi \in S_4} \int _0^t W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (1)}}\\&\qquad + \frac{1}{4} \sum _{\pi \in S_4} \int _0^t W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}\langle W_{}^{{\scriptscriptstyle {T}},n_{\pi (2)}} , W_{}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \rangle _s \\&\quad = \sum _{\pi \in S_4} \int _0^t \int _0^{t_1} \int _0^{t_2} \int _0^{t_3} \, {\mathrm {d}}W_{t_4}^{{\scriptscriptstyle {T}},n_{\pi (4)}} {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \\&\qquad + \frac{1}{2} \sum _{\pi \in S_4} \frac{\delta _{n_{\pi (1)}+n_{\pi (2)}=0}}{\langle n_{\pi (1)} \rangle ^2} \int _0^t \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} \\&\qquad + \frac{1}{4} \sum _{\pi \in S_4} \int _0^t \sigma ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} \, {\mathrm {d}}s\\&\quad = \sum _{\pi \in S_4} \int _0^t \int _0^{t_1} \int _0^{t_2} \int _0^{t_3} \, {\mathrm {d}}W_{t_4}^{{\scriptscriptstyle {T}},n_{\pi (4)}} {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_{\pi (2)}} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_{\pi (1)}} \\&\qquad + \frac{1}{4} \sum _{\pi \in S_4} \bigg [ \frac{\delta _{n_{\pi (1)}+n_{\pi (2)}=0}}{\langle n_{\pi (1)} \rangle ^2} \\&\qquad \times \int _0^t \bigg ( \sigma 
^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} \, {\mathrm {d}}s+ \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}}\\&\qquad + \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} \bigg ) \bigg ]. \end{aligned}$$

Using Itô’s formula, we obtain that

$$\begin{aligned} \begin{aligned}&\int _0^t \bigg ( \sigma ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} \, {\mathrm {d}}s+ \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} \\&\qquad + \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 W_{s}^{{\scriptscriptstyle {T}},n_{\pi (3)}} {\mathrm {d}}W_{s}^{{\scriptscriptstyle {T}},n_{\pi (4)}} \bigg ) \\&\quad = \rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (1)})^2 W_{t}^{{\scriptscriptstyle {T}},n_{\pi (3)}} W_{t}^{{\scriptscriptstyle {T}},n_{\pi (4)}} - \delta _{n_{\pi (3)}+n_{\pi (4)}=0} \int _0^t \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 \frac{\sigma ^{\scriptscriptstyle {T}}_{s}(n_{\pi (3)})^2}{\langle n_{\pi (3)}\rangle ^2} \, {\mathrm {d}}s. \end{aligned} \end{aligned}$$

The total contribution of the second summand is

$$\begin{aligned}&- \frac{1}{4} \sum _{\pi \in S_4} \frac{\delta _{n_{\pi (1)}+n_{\pi (2)}=n_{\pi (3)}+n_{\pi (4)}=0}}{\langle n_{\pi (1)} \rangle ^2 \langle n_{\pi (3)} \rangle ^2} \int _0^t \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 \sigma ^{\scriptscriptstyle {T}}_{s}(n_{\pi (3)})^2 \, {\mathrm {d}}s\\&\quad =-\frac{1}{8} \sum _{\pi \in S_4} \frac{\delta _{n_{\pi (1)}+n_{\pi (2)}=n_{\pi (3)}+n_{\pi (4)}=0}}{\langle n_{\pi (1)} \rangle ^2 \langle n_{\pi (3)} \rangle ^2} \int _0^t \\&\qquad \times \Big ( \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 \sigma ^{\scriptscriptstyle {T}}_{s}(n_{\pi (3)})^2 + \sigma ^{\scriptscriptstyle {T}}_{s}(n_{\pi (1)})^2 \rho ^{\scriptscriptstyle {T}}_{s}(n_{\pi (3)})^2 \Big ) \, {\mathrm {d}}s\\&\quad =-\frac{1}{8} \sum _{\pi \in S_4} \delta _{n_{\pi (1)}+n_{\pi (2)}=n_{\pi (3)}+n_{\pi (4)}=0} \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (1)})^2}{\langle n_{\pi (1)} \rangle ^2}\frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (3)})^2}{\langle n_{\pi (3)} \rangle ^2}. \end{aligned}$$

This completes the proof of the fourth equation (2.19). \(\square \)

Definition 2.6

(Renormalization). We define the renormalization constants \( a^{\scriptscriptstyle {T}}_t, b^{\scriptscriptstyle {T}}_t\in {\mathbb {R}}\) and the multiplier \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t:L^2({\mathbb {T}}^3) \rightarrow L^2({\mathbb {T}}^3) \) by

$$\begin{aligned} a^{\scriptscriptstyle {T}}_t\overset{\text {def}}{=}\sum _{n\in {\mathbb {Z}}^3} \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n)^2}{\langle n\rangle ^2}, \qquad b^{\scriptscriptstyle {T}}_t\overset{\text {def}}{=}\sum _{n_1,n_2\in {\mathbb {Z}}^3} \frac{{\widehat{V}}(n_1+n_2) \rho ^{\scriptscriptstyle {T}}_{t}(n_1)^2 \rho ^{\scriptscriptstyle {T}}_{t}(n_2)^2}{\langle n_1 \rangle ^2 \langle n_2 \rangle ^2} \end{aligned}$$

and

$$\begin{aligned} \widehat{{\mathcal {M}}^{\scriptscriptstyle {T}}_tf}(n) \overset{\text {def}}{=}\Big ( \sum _{m \in {\mathbb {Z}}^3} {\widehat{V}}(n+m)\frac{\rho ^{\scriptscriptstyle {T}}_{t}(m)^2}{\langle m \rangle ^2} \Big ) {\widehat{f}}(n). \end{aligned}$$

Using this notation, we set

$$\begin{aligned} :f^2 :&\overset{\text {def}}{=} f^2 -a^{\scriptscriptstyle {T}}_t, \end{aligned}$$
(2.21)
$$\begin{aligned} :(V * f^2) f :&\overset{\text {def}}{=} (V* f^2) f - a^{\scriptscriptstyle {T}}_t{\widehat{V}}(0) f - 2 {\mathcal {M}}^{\scriptscriptstyle {T}}_tf, \end{aligned}$$
(2.22)
$$\begin{aligned} :(V*f^2) f^2 :&\overset{\text {def}}{=} (V*f^2) f^2 - a^{\scriptscriptstyle {T}}_tV * f^2 - a^{\scriptscriptstyle {T}}_t{\widehat{V}}(0) f^2 - 4 ({\mathcal {M}}^{\scriptscriptstyle {T}}_tf) f \nonumber \\&\qquad + (a^{\scriptscriptstyle {T}}_t)^2 {\widehat{V}}(0) + 2 b^{\scriptscriptstyle {T}}_t. \end{aligned}$$
(2.23)
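For orientation, we note that (2.21) is the classical Wick square: with the Hermite polynomial \( H_2(x,\sigma ^2) = x^2 - \sigma ^2 \) (consistent with the generating function recalled before Definition 2.22 below), we may write

$$\begin{aligned} :f^2 :\, = H_2\big ( f, a^{\scriptscriptstyle {T}}_t\big ), \end{aligned}$$

so that \( a^{\scriptscriptstyle {T}}_t\) plays the role of the variance. The additional terms in (2.22) and (2.23) are the analogous subtractions adapted to the nonlocal Hartree interaction.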

Remark 2.7

As is clear from the definition, the renormalized powers in (2.21), (2.22), and (2.23) depend on the regularization parameters T and t. This dependence will always be clear from context and we thus do not reflect it in our notation.

Definition 2.8

(Renormalization of the dynamics). For any \( N \ge 1 \), we define

$$\begin{aligned} a_N \overset{\text {def}}{=}a^N_\infty = a^\infty _N , \quad b_N \overset{\text {def}}{=}b^N_\infty = b^\infty _N, \quad \text {and} \quad {\mathcal {M}}_N \overset{\text {def}}{=}{\mathcal {M}}^N_\infty = {\mathcal {M}}^\infty _N. \end{aligned}$$
(2.24)

Throughout most of the paper, we will only work with the renormalization constants from Definition 2.6, which contain two finite parameters. The renormalization constants in Definition 2.8 will be more important in the second part of this series.

Proposition 2.9

(Stochastic integral representation of renormalized powers). With \( n_{12}, n_{123},\) and \( n_{1234} \) defined as in (1.12), we have that

(2.25)
(2.26)
(2.27)

Furthermore, it holds that

(2.28)

Remark 2.10

The "lower-order" terms in Definition 2.6 were chosen precisely to obtain the result in Proposition 2.9: the renormalized powers can then be represented solely using iterated stochastic integrals, which have many desirable properties.

Proposition 2.9 essentially follows from Lemma 2.5, Definition 2.6, and a tedious calculation. For the sake of completeness, however, we provide full details.

Proof

We first prove (2.25). Using (2.17), we have that

By subtracting \( a^{\scriptscriptstyle {T}}_t\) from both sides and symmetrizing, this leads to the desired identity.

We now turn to the proof of (2.26). From (2.18), we obtain that

After symmetrizing and comparing with Definition 2.6, this leads to the desired identity. Next, we prove the identity (2.27). Using (2.19), we have that

(2.29)
(2.30)

It remains to simplify (2.29) and (2.30). Regarding (2.29), we have that

Regarding (2.30), we note that

$$\begin{aligned}&-\frac{1}{8} \sum _{\pi \in S_4}\sum _{n_1,n_2,n_3,n_4\in {\mathbb {Z}}^3} {\widehat{V}}(n_1+n_2) e^{i \langle n_{1234} ,x \rangle } \delta _{n_{\pi (1)}+n_{\pi (2)}=n_{\pi (3)}+n_{\pi (4)}=0}\\&\qquad \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (1)})^2}{\langle n_{\pi (1)} \rangle ^2}\frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_{\pi (3)})^2}{\langle n_{\pi (3)} \rangle ^2} \\&\quad = - \sum _{n_1,n_3 \in {\mathbb {Z}}^3} {\widehat{V}}(0) \frac{\rho ^{\scriptscriptstyle {T}}_{t}(n_1)^2 \rho ^{\scriptscriptstyle {T}}_{t}(n_3)^2}{\langle n_1 \rangle ^2 \langle n_3 \rangle ^2} - 2 \sum _{n_1,n_2 \in {\mathbb {Z}}^3} \frac{ {\widehat{V}}(n_1+n_2) \rho ^{\scriptscriptstyle {T}}_{t}(n_1)^2 \rho ^{\scriptscriptstyle {T}}_{t}(n_2)^2}{\langle n_1 \rangle ^2 \langle n_2 \rangle ^2} \\&\quad = - {\widehat{V}}(0) (a^{\scriptscriptstyle {T}}_t)^2 - 2 b^{\scriptscriptstyle {T}}_t. \end{aligned}$$

After symmetrizing, this completes the proof of (2.27).

Finally, it remains to prove (2.28). Since V is real-valued and even, we have that \( {\widehat{V}}(n) = \overline{{\widehat{V}}(n)} = {\widehat{V}}(-n) \). As long as \( n_{1234}=0 \), this implies

$$\begin{aligned} \sum _{\pi \in S_4} {\widehat{V}}(n_{\pi (1)}+n_{\pi (2)}) = 4 \sum _{\pi \in S_3} {\widehat{V}}(n_{\pi (1)}+n_{\pi (2)}). \end{aligned}$$
(2.31)
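For the reader's convenience, we sketch the elementary counting behind (2.31). Each unordered pair \( \{i,j\} \subseteq \{1,2,3,4\} \) arises as \( \{\pi (1),\pi (2)\} \) for exactly four permutations \( \pi \in S_4 \), so that

$$\begin{aligned} \sum _{\pi \in S_4} {\widehat{V}}(n_{\pi (1)}+n_{\pi (2)}) = 4 \sum _{\{i,j\} \subseteq \{1,2,3,4\}} {\widehat{V}}(n_i+n_j) = 8 \Big ( {\widehat{V}}(n_{12}) + {\widehat{V}}(n_{13}) + {\widehat{V}}(n_{23}) \Big ) = 4 \sum _{\pi \in S_3} {\widehat{V}}(n_{\pi (1)}+n_{\pi (2)}), \end{aligned}$$

where the second equality uses that \( n_{1234}=0 \) and the evenness of \( {\widehat{V}} \) imply \( {\widehat{V}}(n_i+n_j) = {\widehat{V}}(n_k+n_l) \) for complementary pairs \( \{i,j\} \) and \( \{k,l\} \).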

Using (2.31), (2.28) follows after inserting (2.26) and (2.27) into the two sides of the identity. \(\square \)

Like the monomials and Hermite polynomials (further discussed below), the generalized and renormalized powers in Definition 2.6 satisfy a binomial formula.

Lemma 2.11

(Binomial formula). For any \( f \in H^1({\mathbb {T}}^3) \), we have the binomial formulas

(2.32)

and

(2.33)

Remark 2.12

Overall, the terms in (2.33) obey better analytical estimates than their counterparts for the \( \Phi ^4_3\)-model in [6]. However, their algebraic structure is more complicated. The most challenging term is

which requires a delicate random matrix estimate (Sect. 3.3).

Proof of Lemma 2.11

This follows from Definition 2.6 and the classical binomial formula. For the quartic binomial formula (2.33), we also use the self-adjointness of the convolution with V and the multiplier \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t\). \(\square \)

While this is not reflected in our notation, it is clear from Definition 2.6 that the multiplier \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t\) depends linearly on the interaction potential V. In the proof of the random matrix estimate (Proposition 3.7), we will need to further decompose \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t\), both with respect to the interaction potential V and dyadic frequency blocks. We introduce the notation corresponding to this decomposition in the next definition.

Definition 2.13

We let \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[V;N_1,N_2] \) be the Fourier multiplier corresponding to the symbol

$$\begin{aligned} n \mapsto \sum _{k\in {\mathbb {Z}}^3} \frac{{\widehat{V}}(n+k)}{\langle k \rangle ^2} \chi _{N_1}(k) \chi _{N_2}(k) \rho ^{\scriptscriptstyle {T}}_{t}(k)^2. \end{aligned}$$
(2.34)

In the next definition, we define our last renormalization of a stochastic object.

Definition 2.14

We define the correlation function on \( {\mathbb {T}}^3 \) by

$$\begin{aligned} {\mathfrak {C}}^T_t[N_1,N_2](y) \overset{\text {def}}{=}\sum _{k\in {\mathbb {Z}}^3} \frac{\chi _{N_1}(k) \chi _{N_2}(k)}{\langle k\rangle ^2} \rho ^{\scriptscriptstyle {T}}_{t}(k)^2 e^{i\langle k,y \rangle }. \end{aligned}$$
(2.35)

We further define

(2.36)

Here, \( \tau _y \) denotes the translation operator \( \tau _y f(x) = f(x-y) \).

The next lemma relates the multiplier and correlation function from Definitions 2.13 and 2.14, respectively.

Lemma 2.15

(Physical space representation of \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t\)). For any \( f \in C^\infty _x({\mathbb {T}}^3) \), we have that

$$\begin{aligned} {\mathcal {M}}^{\scriptscriptstyle {T}}_t[V;N_1,N_2] f = \big ( {\mathfrak {C}}^T_t[N_1,N_2]V\big ) * f. \end{aligned}$$
(2.37)

Proof

By definition of the multiplier \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[V;N_1,N_2] \) and since

$$\begin{aligned} k \mapsto \frac{1}{\langle k \rangle ^2} \chi _{N_1}(k) \chi _{N_2}(k) \rho ^{\scriptscriptstyle {T}}_{t}(k)^2 \end{aligned}$$
(2.38)

is even, the symbol in (2.34) is the convolution of \( {\widehat{V}} \) with (2.38). As a result, the inverse Fourier transform of the sequence \({n\mapsto {\mathcal {M}}^{\scriptscriptstyle {T}}_t[V;N_1,N_2](n)}\) is given by

$$\begin{aligned} \Big ( \sum _{k\in {\mathbb {Z}}^3} \frac{\chi _{N_1}(k) \chi _{N_2}(k) }{\langle k \rangle ^2} \rho ^{\scriptscriptstyle {T}}_{t}(k)^2 e^{i\langle k , x \rangle } \Big ) V(x) = {\mathfrak {C}}^T_t[N_1,N_2](x) V(x). \end{aligned}$$

\(\square \)

In Lemma 2.5, Proposition 2.9, Lemmas 2.11 and 2.15, we have dealt with the algebraic structure of stochastic objects. We now move from algebraic aspects towards analytic estimates. In the following lemmas, we show that several stochastic objects are well-defined and study their regularities.

Lemma 2.16

(Stochastic objects I). For every \( p \ge 1 \), \( \epsilon >0 \), and every \( 0< \gamma < \min (\beta ,1)\), we have that

(2.39)
(2.40)
(2.41)

Furthermore, as \( t\rightarrow \infty \) and/or \( T \rightarrow \infty \), the three stochastic objects above converge in their respective spaces indicated by (2.39)–(2.41).

Remark 2.17

The statement and proof of Lemma 2.16 are standard and the respective regularities can be deduced by simple “power-counting”. Nevertheless, we present the proof to familiarize the reader with our set-up and as a warm-up for Lemma 2.20 below.

Proof

The first step in the proofs of (2.39)–(2.41) is a reduction to an estimate in \( L^2(\Omega \times {\mathbb {T}}^3) \) using Gaussian hypercontractivity. We provide the full details of this step for (2.39), but will omit similar details in the remaining estimates (2.40)–(2.41).

Let \( N \ge 1 \) and let \( q=q(\epsilon )\ge 1 \) be sufficiently large. By using Hölder’s inequality in \( \omega \in \Omega \), it suffices to prove the estimates for \( p \ge q \). Using Bernstein’s inequality and Minkowski’s integral inequality, we obtain

By Gaussian hypercontractivity (Lemma A.1), we obtain that

Since the distribution of the stochastic object is translation invariant, the corresponding function of \( x \) is constant. We can then replace \( L_x^q({\mathbb {T}}^3) \) by \( L_x^2({\mathbb {T}}^3) \) and obtain

In order to prove (2.39), it therefore remains to show uniformly in \( T,t \ge 0 \) that

(2.42)

Using Proposition 2.9, the orthogonality of the iterated stochastic integrals, and Itô’s isometry, we have that

This completes the proof of (2.39). The estimate (2.40) can be deduced from the smoothing properties of V or by repeating the exact same argument. It remains to prove (2.41), which can be reduced using hypercontractivity (and the room in \( \gamma \)) to the estimate

Using Proposition 2.9, the orthogonality of the iterated stochastic integrals, and Itô’s isometry, we have that

By first summing in \( n_3 \), using that \( 3 - 2 \gamma > 1 \), and then in \( n_1 \) and \(n_2 \), using \( \gamma < \beta \), we obtain

$$\begin{aligned}&\sum _{ n_1,n_2,n_3 \in {\mathbb {Z}}^3} \frac{1}{\langle n_1 + n_2 + n_3 \rangle ^{3-2\gamma }} \frac{1}{\langle n_1+n_2 \rangle ^{2\beta }}\frac{1}{\langle n_1\rangle ^2 \langle n_2 \rangle ^2 \langle n_3 \rangle ^2} \\&\quad \lesssim \sum _{n_1,n_2 \in {\mathbb {Z}}^3} \frac{1}{\langle n_1 + n_2\rangle ^{2+ 2 (\beta -\gamma )}} \frac{1}{\langle n_1 \rangle ^2 \langle n_2 \rangle ^2} \lesssim 1. \end{aligned}$$

\(\square \)
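Several of the summations above (and in the remainder of this section) rely on a standard discrete convolution estimate; we record one sufficient version for the reader's convenience (cf. Lemma B.6): for all \( a,b \in (0,3) \) with \( a+b>3 \),

$$\begin{aligned} \sum _{m \in {\mathbb {Z}}^3} \frac{1}{\langle m \rangle ^{a} \langle n-m \rangle ^{b}} \lesssim \frac{1}{\langle n \rangle ^{a+b-3}} \qquad \text {for all } n \in {\mathbb {Z}}^3. \end{aligned}$$

For instance, the first inequality in the preceding display follows by taking \( a=2 \) and \( b=3-2\gamma \) in the summation over \( n_3 \).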

We also record the following refinement of (2.41) in Lemma 2.16, which will be needed in the proof of Lemma 2.20 below.

Corollary 2.18

For every \( 0< \gamma < \min (1,\beta ) \) and any \( n \in {\mathbb {Z}}^3 \), we can control the corresponding Fourier coefficients by

(2.43)

Proof

Arguing as in the proof of Lemma 2.16, it suffices to prove that

$$\begin{aligned} \sum _{ \begin{array}{c} n_1,n_2,n_3\in {\mathbb {Z}}^3:\\ n_{123}=n \end{array}} \frac{1}{\langle n_{12}\rangle ^{2\beta } \langle n_1 \rangle ^2 \langle n_2 \rangle ^2 \langle n_3 \rangle ^2} \lesssim \frac{1}{\langle n \rangle ^{2\gamma }}. \end{aligned}$$
(2.44)

Indeed, after parametrizing the sum by \( n_1 \) and \( n_3 \), (2.44) follows from

$$\begin{aligned} \sum _{ \begin{array}{c} n_1,n_2,n_3\in {\mathbb {Z}}^3:\\ n_{123}=n \end{array}} \frac{1}{\langle n_{12}\rangle ^{2\beta } \langle n_1 \rangle ^2 \langle n_2 \rangle ^2 \langle n_3 \rangle ^2}&= \sum _{ n_1, n_3 \in {\mathbb {Z}}^3 } \frac{1}{\langle n-n_3 \rangle ^{2\beta } \langle n_1 \rangle ^2 \langle n-n_1-n_3 \rangle ^2 \langle n_3 \rangle ^2} \\&\lesssim \sum _{ n_3 \in {\mathbb {Z}}^3} \frac{1}{\langle n-n_3 \rangle ^{1+2\beta } \langle n_3 \rangle ^2} \\&\lesssim \langle n \rangle ^{-2\gamma }. \end{aligned}$$

\(\square \)

Lemma 2.19

(Stochastic objects II). For any sufficiently small \( \delta >0\) and any \( N_1,N_2 \ge 1 \), it holds that

(2.45)

Proof

Arguing as in the proof of (2.39) in Lemma 2.16, we have that

(2.46)

It only remains to move the supremum in \( y\in {\mathbb {T}}^3 \) into the expectation. From a crude estimate, we have for all \( y,y^\prime \in {\mathbb {T}}^3 \) that

By Kolmogorov’s continuity theorem (cf. [40, Theorem 4.3.2]), we obtain for any \( 0< \alpha < 1 \) that

Combining this with (2.46) leads to the desired estimate. \(\square \)

The next lemma is similar to Lemma 2.16, but is concerned with more complicated stochastic objects. In order to shorten the argument, we will no longer use Itô’s formula to express products of stochastic integrals. Instead, we will utilize the product formula for multiple stochastic integrals from [34, Proposition 1.1.3]. Before we state the lemma, we follow [5, 6] and define

(2.47)

We emphasize that \( {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]}\) contains the interaction potential V even though this is not reflected in our notation.

Lemma 2.20

(Stochastic objects III). For every \( p \ge 1 \), \( \epsilon >0 \), and every \( 0< \gamma < \min (\beta ,\frac{1}{2})\), we have that

(2.48)
(2.49)
(2.50)

Remark 2.21

The analog of this stochastic object for the \( \Phi ^4_3\)-model in [5] requires a further logarithmic renormalization. In our case, however, the additional smoothing from the interaction potential V eliminates the responsible logarithmic divergence.

Proof

We first prove (2.48), which is (by far) the easiest estimate. As in the proof of Lemma 2.16, we can use Gaussian hypercontractivity (Lemma A.1) to reduce it to the estimate

$$\begin{aligned} {\mathbb {E}}\Big [ \Vert {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]} \Vert _{H^{\frac{1}{2}+\gamma }_x({\mathbb {T}}^3)}^2 \Big ] \lesssim 1. \end{aligned}$$
(2.51)

The rest of the argument follows from Corollary 2.18 and a deterministic estimate. More precisely, it follows from \( \Vert \sigma ^{\scriptscriptstyle {T}}_{s} \Vert _{L^2_s}= 1 \) that

For a small \( \delta >0 \), we obtain from Corollary 2.18 (with \( \gamma \) replaced by \( \gamma +\delta \)) that

We now turn to the proof of (2.49). Using the same reductions based on Gaussian hypercontractivity as before, it suffices to prove that

(2.52)

We first rewrite the relevant stochastic object as a product of multiple stochastic integrals instead of iterated stochastic integrals. This allows us to use the product formula from Lemma A.4, which leads to a (relatively) simple expression. To simplify the notation below, we define the symmetrization of \( {\widehat{V}}(n_1+n_2) \) by

$$\begin{aligned} {\widehat{V}}_S(n_1,n_2,n_3) = \frac{1}{6} \sum _{\pi \in S_3} {\widehat{V}}(n_{\pi (1)}+n_{\pi (2)}). \end{aligned}$$

From Proposition 2.9, (2.47), and the stochastic Fubini theorem (see [16, Theorem 4.33]), we have that

$$\begin{aligned}&{\mathbb {W}}_t^{\scriptscriptstyle {T},[3]}(x) \\&\quad = \sum _{\begin{array}{c} n_1,n_2,n_3 \in {\mathbb {Z}}^3 \\ \pi \in S_3 \end{array}} \frac{{\widehat{V}}(n_{\pi (1)}+n_{\pi (2)})}{\langle n_{123} \rangle ^2} e^{ i \langle n_{123}, x \rangle } \\&\qquad \times \int _0^t \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2 \Big ( \int _0^s \int _0^{t_1} \int _0^{t_2} {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_3}{\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_2} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1} \Big ) \, {\mathrm {d}}s\\&\quad = \sum _{n_1,n_2,n_3\in {\mathbb {Z}}^3} \frac{{\widehat{V}}_S(n_1,n_2,n_3)}{\langle n_{123} \rangle ^2} e^{i \langle n_{123}, x \rangle } \\&\qquad \times \int _0^t \int _0^{t_1} \int _0^{t_2} \Big ( \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2 \, {\mathrm {d}}s\Big ) {\mathrm {d}}W_{t_3}^{{\scriptscriptstyle {T}},n_3}{\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_2} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1} \end{aligned}$$

We define the symmetric function f by

$$\begin{aligned}&f(t_1,n_1,t_2,n_2,t_3,n_3;t,x) \\&\quad \overset{\text {def}}{=}\frac{{\widehat{V}}_S(n_1,n_2,n_3)}{6 \langle n_{123} \rangle ^2} \Big ( \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2 \, {\mathrm {d}}s\Big ) e^{i \langle n_{123}, x \rangle } 1\{ 0 \le t_1,t_2,t_3 \le t\}, \end{aligned}$$

where we view both \( t \in {\mathbb {R}}_{>0} \) and \( x \in {\mathbb {T}}^3 \) as fixed parameters. Using the language from Sect. A.2 and Lemma A.2, we obtain that

$$\begin{aligned} {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]}(x) = {\mathcal {I}}_3[f(\cdot ;t,x)], \end{aligned}$$
(2.53)

where \( {\mathcal {I}}_3 \) is a multiple stochastic integral. After defining

$$\begin{aligned} g(t_4,n_4,t_5,n_5;t,x) \overset{\text {def}}{=}{\widehat{V}}(n_4+n_5) e^{i \langle n_{45}, x \rangle } 1\{ 0 \le t_4,t_5 \le t\}, \end{aligned}$$

a similar but easier calculation leads to

(2.54)

By combining (2.53) and (2.54), we obtain that

By using the product formula for multiple stochastic integrals (Lemma A.4), we obtain that

Inserting the definitions of f and g, this leads to

(2.55)

where the Gaussian chaoses \( {\mathcal {G}}_5, {\mathcal {G}}_3 \), and \( {\mathcal {G}}_1 \) are given by

$$\begin{aligned} {\mathcal {G}}_5(t,x)&= \sum _{n_1,\ldots ,n_5 \in {\mathbb {Z}}^3} \frac{{\widehat{V}}(n_{12}) {\widehat{V}}(n_{45})}{\langle n_{123} \rangle ^2} e^{i \langle n_{12345}, x \rangle } \\&\quad \times \int _{[0,t]^5} \Big ( \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2\, {\mathrm {d}}s\Big ) {\mathrm {d}}W_{t_5}^{{\scriptscriptstyle {T}},n_5} \ldots {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1}, \\ {\mathcal {G}}_3(t,x)&= \sum _{n_1,\ldots , n_5 \in {\mathbb {Z}}^3} \bigg [ \delta _{n_{35}=0} \frac{{\widehat{V}}_S(n_1,n_2,n_3) {\widehat{V}}(n_{45})}{\langle n_{123}\rangle ^2 \langle n_3 \rangle ^2} e^{i \langle n_{124}, x \rangle } \\&\quad \times \int _{[0,t]^3} \Big ( \int _0^t \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{t_3}(n_3)^2 \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2 \, {\mathrm {d}}s{\mathrm {d}}t_3 \Big ) {\mathrm {d}}W_{t_4}^{{\scriptscriptstyle {T}},n_4} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_2} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1} \bigg ], \\ {\mathcal {G}}_1(t,x)&= \frac{1}{2} \sum _{n_1,\ldots ,n_5 \in {\mathbb {Z}}^3} \bigg [ \delta _{n_{24}=n_{35}=0} \frac{{\widehat{V}}_S(n_1,n_2,n_3) {\widehat{V}}(n_{45})}{\langle n_{123}\rangle ^2 \langle n_2 \rangle ^2 \langle n_3 \rangle ^2} e^{i \langle n_{1}, x \rangle } \\&\quad \times \int _{[0,t]}\Big ( \int _0^t \int _0^t \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{t_2}(n_2)^2\sigma ^{\scriptscriptstyle {T}}_{t_3}(n_3)^2 \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2 \, {\mathrm {d}}s{\mathrm {d}}t_3 {\mathrm {d}}t_2 \Big ) {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1} \bigg ]. \end{aligned}$$

Using the \( L^2\)-orthogonality of the multiple stochastic integrals together with \( \Vert \sigma ^{\scriptscriptstyle {T}}_{s}\Vert _{L^2_s({\mathbb {R}}_{>0})} \le 1 \), we obtain that

(2.56)
(2.57)
(2.58)

The estimates of the sums (2.56)–(2.58) follow from standard arguments. We present the details for (2.56) and (2.58), but omit the details for the intermediate term (2.57).

We start with the estimate of (2.56). The interaction with \( n_1,n_2,n_3 \) at low frequency scales and \( n_4,n_5 \) at high frequency scales is worse than all other contributions, so there is a lot of room in several steps below. Using Lemma B.6 for the sum in \( n_5 \), which requires \( \gamma < \min (1,\beta ) \), and summing in \( n_4\), we obtain for a small \( \delta >0 \) that

$$\begin{aligned}&\sum _{n_1,n_2,n_3,n_4,n_5 \in {\mathbb {Z}}^3} \langle n_{12345} \rangle ^{-2+2\gamma } \langle n_{123} \rangle ^{-4} |{\widehat{V}}(n_{12})|^2 |{\widehat{V}}(n_{45})|^2 \prod _{j=1}^5 \langle n_j \rangle ^{-2} \\&\quad \lesssim \sum _{n_1,n_2,n_3,n_4\in {\mathbb {Z}}^3} \langle n_{123} \rangle ^{-4} \langle n_{12}\rangle ^{-2\beta } \Big (\prod _{j=1}^4 \langle n_j \rangle ^{-2} \Big ) \Big ( \sum _{n_5 \in {\mathbb {Z}}^3} \langle n_{1234} + n_5 \rangle ^{-2+2\gamma } \langle n_4 +n_5 \rangle ^{-2 \beta } \langle n_5 \rangle ^{-2} \Big ) \\&\quad \lesssim \sum _{n_1,n_2,n_3 \in {\mathbb {Z}}^3} \langle n_{123} \rangle ^{-4} \langle n_{12} \rangle ^{-2\beta } \Big (\prod _{j=1}^3 \langle n_j \rangle ^{-2} \Big ) \Big ( \sum _{n_4\in {\mathbb {Z}}^3} \big ( \langle n_{1234} \rangle ^{-1-\delta } + \langle n_4 \rangle ^{-1-\delta } \big ) \langle n_4 \rangle ^{-2} \Big ) \\&\quad \lesssim \sum _{n_1,n_2,n_3 \in {\mathbb {Z}}^3} \langle n_{123} \rangle ^{-4} \langle n_{12} \rangle ^{-2\beta } \prod _{j=1}^3 \langle n_j \rangle ^{-2} . \end{aligned}$$

Summing in \( n_3 \), \(n_2 \), and \( n_1 \), we obtain that

$$\begin{aligned}&\sum _{n_1,n_2,n_3 \in {\mathbb {Z}}^3} \langle n_{123} \rangle ^{-4} \langle n_{12} \rangle ^{-2\beta } \prod _{j=1}^3 \langle n_j \rangle ^{-2}\\&\quad \lesssim \sum _{n_1,n_2 \in {\mathbb {Z}}^3} \langle n_{12} \rangle ^{-3-2\beta } \langle n_1 \rangle ^{-2} \langle n_2 \rangle ^{-2} \lesssim \sum _{n_1 \in {\mathbb {Z}}^3} \langle n_1 \rangle ^{-4} \lesssim 1. \end{aligned}$$

We now turn to (2.58), which corresponds to a double probabilistic resonance. We emphasize that this term would be unbounded without the smoothing effect of the interaction potential V, which is the reason for the additional renormalization in the \( \Phi ^4_3\)-model, see e.g. [5, Lemma 24]. Using Lemma B.6 for the sum in \( n_3 \), we obtain that

$$\begin{aligned}&\sum _{n_1 \in {\mathbb {Z}}^3} \langle n_1 \rangle ^{-4+2\gamma } \Big ( \sum _{n_2,n_3 \in {\mathbb {Z}}^3} \langle n_{123} \rangle ^{-2} |{\widehat{V}}_S(n_1,n_2,n_3)| |{\widehat{V}}(n_{23})| \langle n_2 \rangle ^{-2} \langle n_3 \rangle ^{-2} \Big )^2 \\&\quad \lesssim \sum _{n_1 \in {\mathbb {Z}}^3} \langle n_1 \rangle ^{-4+2\gamma } \Big ( \sum _{n_2,n_3 \in {\mathbb {Z}}^3} \langle n_{123} \rangle ^{-2} \langle n_{23} \rangle ^{-\beta } \langle n_2 \rangle ^{-2} \langle n_3 \rangle ^{-2} \Big )^2 \\&\quad \lesssim \sum _{n_1 \in {\mathbb {Z}}^3} \langle n_1 \rangle ^{-4+2\gamma } \Big ( \sum _{n_2 \in {\mathbb {Z}}^3} \big ( \langle n_{12} \rangle ^{-1-\beta } + \langle n_2 \rangle ^{-1-\beta } \big ) \langle n_2 \rangle ^{-2} \Big )^2 \\&\quad \lesssim \sum _{n_1 \in {\mathbb {Z}}^3} \langle n_1 \rangle ^{-4+2\gamma } \lesssim 1, \end{aligned}$$

provided that \( \gamma < 1/2 \). This completes the proof of (2.49).

We now turn to the proof of (2.50). This stochastic object has a more complicated algebraic structure than the stochastic object in (2.49), but a similar analytic behavior. From the definition of \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t\), we obtain that

Using the variable names \( m_1,m_4,m_5 \in {\mathbb {Z}}^3 \) instead of \( m_1,m_2,m_3 \in {\mathbb {Z}}^3 \) is convenient once we insert an expression for \( {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]} \). A minor modification of the derivation of (2.53) shows that

$$\begin{aligned} \widehat{{\mathbb {W}}_t^{\scriptscriptstyle {T},[3]}}(m_1) = {\mathcal {I}}_3[f(\cdot ;t,m_1)], \end{aligned}$$
(2.59)

where the symmetric function \( f(\cdot ;t,m_1) \) is given by

$$\begin{aligned}&f(t_1,n_1,t_2,n_2,t_3,n_3;t,m_1) \\&\quad = 1\{ n_{123}=m_1\} \frac{1}{\langle n_{123} \rangle ^2} {\widehat{V}}_S(n_1,n_2,n_3) \\&\qquad \times \Big ( \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2 \, {\mathrm {d}}s\Big ) 1\{ 0 \le t_1,t_2,t_3 \le t\}. \end{aligned}$$

Using Lemmas 2.5 and A.2, we obtain that

(2.60)

where the symmetric function \( g(\cdot ;t,m_4,m_5)\) is given by

$$\begin{aligned}&g(t_4,n_4,t_5,n_5;t,m_4,m_5) \overset{\text {def}}{=}\frac{1}{2} \Big ( 1\{ (n_4,n_5)= (m_4,m_5)\} + 1\{ (n_4,n_5)\\&\quad =(m_5,m_4)\} \Big ) 1\{ 0 \le t_4,t_5 \le t\}. \end{aligned}$$

The author believes that inserting indicators such as \( 1\{ (n_4,n_5)= (m_4,m_5)\} \) is notationally unpleasant, but it allows us to use the multiple stochastic integrals from [34] without having to “reinvent the wheel”. With this notation, we obtain that

Using Lemma A.4, we obtain that

(2.61)

where the Gaussian chaoses are defined as

$$\begin{aligned} \widetilde{{\mathcal {G}}}_5(t,x)&= \sum _{n_1,\ldots ,n_5 \in {\mathbb {Z}}^3} \frac{{\widehat{V}}(n_{12}) {\widehat{V}}(n_{1234})}{\langle n_{123} \rangle ^2} e^{i \langle n_{12345}, x \rangle } \\&\quad \times \int _{[0,t]^5} \Big ( \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2\, {\mathrm {d}}s\Big ) {\mathrm {d}}W_{t_5}^{{\scriptscriptstyle {T}},n_5} \ldots {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1}, \\ \widetilde{{\mathcal {G}}}_3(t,x)&= \frac{1}{2} \sum _{n_1,\ldots , n_5 \in {\mathbb {Z}}^3} \bigg [ \delta _{n_{35}=0} \frac{{\widehat{V}}_S(n_1,n_2,n_3) }{\langle n_{123}\rangle ^2 \langle n_3 \rangle ^2} \Big ( {\widehat{V}}(n_{12})+{\widehat{V}}(n_{1234}) \Big ) e^{i \langle n_{124}, x \rangle } \\&\quad \times \int _{[0,t]^3} \Big ( \int _0^t \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{t_3}(n_3)^2 \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2 \, {\mathrm {d}}s{\mathrm {d}}t_3 \Big ) {\mathrm {d}}W_{t_4}^{{\scriptscriptstyle {T}},n_4} {\mathrm {d}}W_{t_2}^{{\scriptscriptstyle {T}},n_2} {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1} \bigg ], \\ \widetilde{{\mathcal {G}}}_1(t,x)&= \frac{1}{4} \sum _{n_1,\ldots ,n_5 \in {\mathbb {Z}}^3} \bigg [ \delta _{n_{24}=n_{35}=0} \frac{{\widehat{V}}_S(n_1,n_2,n_3)}{\langle n_{123}\rangle ^2 \langle n_2 \rangle ^2 \langle n_3 \rangle ^2} \Big ( {\widehat{V}}(n_{12})+{\widehat{V}}(n_{13})\Big ) e^{i \langle n_{1}, x \rangle } \\&\quad \times \int _{[0,t]}\Big ( \int _0^t \int _0^t \int _{\max (t_1,t_2,t_3)}^t \sigma ^{\scriptscriptstyle {T}}_{t_2}(n_2)^2\sigma ^{\scriptscriptstyle {T}}_{t_3}(n_3)^2 \sigma ^{\scriptscriptstyle {T}}_{s}(n_{123})^2 \, {\mathrm {d}}s{\mathrm {d}}t_3 {\mathrm {d}}t_2 \Big ) {\mathrm {d}}W_{t_1}^{{\scriptscriptstyle {T}},n_1} \bigg ]. \end{aligned}$$

This concludes the algebraic aspects of the proof of (2.50). Starting from (2.61), the analytic estimates are essentially as in the proof of the earlier estimate (2.49) and we omit the details. This completes the proof of the lemma. \(\square \)

In the construction of the drift measure (Sect. 4), we need a renormalization of a further stochastic object. This term has regularity \( 0- \), and hence its n-th power is almost well-defined. While we could use iterated stochastic integrals to define the renormalized power, it is notationally convenient to use an equivalent definition through Hermite polynomials. This definition is also closer to the earlier literature on dispersive PDEs. We recall that the Hermite polynomials \( \{ H_n(x,\sigma ^2)\}_{n\ge 0} \) are defined through the generating function

$$\begin{aligned} e^{tx - \frac{1}{2} \sigma ^2 t^2} = \sum _{n=0}^\infty \frac{t^n}{n!} H_n(x,\sigma ^2). \end{aligned}$$

Definition 2.22

We define the renormalized n-th power by

(2.62)

We list two basic properties of the renormalized power in the next lemma.

Lemma 2.23

(Stochastic objects IV). We have for all \( n \ge 1 \), \( p \ge 1 \), and \( \epsilon > 0 \) that

(2.63)

Furthermore, we have for all \( f \in H^1_x({\mathbb {T}}^3) \) the binomial formula

(2.64)

Since the proof is standard, we omit the details. For similar arguments, we refer the reader to [36].

4 Construction of the Gibbs measure

The goal of this section is to prove Theorem 1.3. The main ingredient is the Boué-Dupuis formula, which yields a variational formulation of the Laplace transform of \( {\widetilde{\mu }}_{T}\). Our argument follows earlier work of Barashkov and Gubinelli [5], but the convolution inside the nonlinearity requires additional ingredients (see Sects. 3.2 and 3.3).

4.1 The variational problem, uniform bounds, and their consequences

Due to the singularity of the Gibbs measure for \( 0< \beta < 1/2 \), which is the main statement in Theorem 1.5, the construction will require one final renormalization. We recall that \( \lambda > 0 \) denotes the coupling constant in the nonlinearity and we let \( c^{\scriptscriptstyle {T},\lambda }\) be a real-valued constant which remains to be chosen.

For the rest of this section, we let \( \varphi :{\mathcal {C}}_t^0 {\mathcal {C}}_{x}^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \rightarrow {\mathbb {R}}\) be a functional with at most linear growth. We denote the (non-renormalized) potential energy by

$$\begin{aligned} {\mathcal {V}}(f) \overset{\text {def}}{=}\int _{{\mathbb {T}}^3} (V* f^2)(x) f^2(x) \,{\mathrm {d}}x= \int _{{\mathbb {T}}^3 \times {\mathbb {T}}^3} V(x-y) f(y)^2 f(x)^2 \,{\mathrm {d}}x\,{\mathrm {d}}y. \end{aligned}$$
(3.2)
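Since the interaction potential satisfies \( V \gtrsim 1 \) on \( {\mathbb {T}}^3 \), the potential energy is nonnegative and, in fact, coercive:

$$\begin{aligned} {\mathcal {V}}(f) \gtrsim \Big ( \int _{{\mathbb {T}}^3} f(x)^2 \,{\mathrm {d}}x\Big )^2 = \Vert f \Vert _{L^2_x}^4. \end{aligned}$$

This positivity is what allows the term \( \frac{\lambda }{4} {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}(u)) \) to be treated as a coercive contribution in the variational problem below.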

We denote the renormalized version of \( {\mathcal {V}}(f) \) by

$$\begin{aligned} :{\mathcal {V}}^{\scriptscriptstyle {T},\lambda }(f) :\overset{\text {def}}{=}\frac{\lambda }{4} \int _{{\mathbb {T}}^3} :( V* f^2) f^2 :\,{\mathrm {d}}x+ c^{\scriptscriptstyle {T},\lambda }, \end{aligned}$$
(3.3)

where \( :( V* f^2) f^2 :\) is as in Definition 2.6. To further simplify the notation, we denote for any \( u :[0,\infty ) \times {\mathbb {T}}^3 \rightarrow {\mathbb {R}}\) the space-time \( L^2\)-norm by

$$\begin{aligned} \Vert u \Vert _{{L^2_{t,x}}}^2 \overset{\text {def}}{=}\int _0^\infty \Vert u_t \Vert _{L^2_x({\mathbb {T}}^3)}^2 \, {\mathrm {d}}t. \end{aligned}$$
(3.4)

With this notation, we can now state the main estimate of this section.

Proposition 3.1

(Main estimate for the variational problem). If the renormalization constants \( c^{\scriptscriptstyle {T},\lambda }\) are chosen appropriately, we have that

(3.5)

where

(3.6)

and

$$\begin{aligned} | \Psi ^{\scriptscriptstyle {T},\varphi }_\lambda (W, I[u])| \le Q_T(W,\varphi ,\lambda ) + \frac{1}{2} \Big ( \frac{\lambda }{4} {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}(u)) + \frac{1}{2} \Vert l^{\scriptscriptstyle {T}}[u] \Vert _{{L^2_{t,x}}}^2 \Big ). \end{aligned}$$
(3.7)

Here, \( Q_T(W,\varphi ,\lambda ) \) satisfies for all \( p \ge 1 \) the estimate \( {\mathbb {E}}[Q_T(W,\varphi ,\lambda )^p] \lesssim _p 1 \), where the implicit constant is uniform in \( T \ge 1 \).

The argument of \( \varphi \) in (3.5) is not regularized, that is, we are working with W instead of \( W^{\scriptscriptstyle {T}}\). This is important to obtain control over \( \mu _T\), which is the pushforward of \( {\widetilde{\mu }}_T \) under \( W_\infty \).

Remark 3.2

This is a close analog of [5, Theorem 1]. Due to the smoothing effect of the interaction potential V, however, the shifted drift \( l^{\scriptscriptstyle {T}}[u] \) is simpler. In contrast to the \( \Phi ^4_3 \)-model, the difference \( l^{\scriptscriptstyle {T}}[u] - u \) does not depend on u. As is evident from the proof, we have that

$$\begin{aligned} \Psi ^{\scriptscriptstyle {T},\varphi }_\lambda (W, I[u]) = \varphi (W+I[u]) + \Psi ^{\scriptscriptstyle {T},0}_\lambda (W, I[u]). \end{aligned}$$
(3.8)

This observation will only be needed in Proposition 3.3 below.

We first record the following proposition, which is a direct consequence of Proposition 3.1 and the Boué-Dupuis formula.

Proposition 3.3

The measures \( {\widetilde{\mu }}_{T}\) satisfy the following properties:

  1. (i)

    The normalization constants \( {\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }\) satisfy \( {\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }\sim _\lambda 1 \), i.e., they are bounded away from zero and infinity uniformly in T.

  2. (ii)

    If the functional \( \varphi :{\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \rightarrow {\mathbb {R}}\) has at most linear growth, then

    $$\begin{aligned} \sup _{T\ge 0 } {\mathbb {E}}_{{\widetilde{\mu }}_T}\Big [ \exp \big ( - \varphi (W) \big ) \Big ] \lesssim _\varphi 1. \end{aligned}$$
  3. (iii)

    The family of measures \( ({\widetilde{\mu }}_{T})_{T\ge 0} \) is tight on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-\frac{1}{2}-\kappa }([0,\infty ]\times {\mathbb {T}}^3)\).

Proof of Proposition 3.3

We first prove (i). From the definition of \( \mu _T \), we have that

Using the Boué-Dupuis formula and Proposition 3.1, we have that

From (3.6), we directly obtain that

$$\begin{aligned} - \log ( {\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }) \ge - C_\lambda . \end{aligned}$$
(3.9)

By choosing the drift u such that \( l^{\scriptscriptstyle {T}}_t[u] = 0 \), which implies \( I_t^{\scriptscriptstyle {T}}[u] = {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]} \), we obtain from Lemma 2.20 that

$$\begin{aligned} -\log ({\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }) \lesssim _\lambda 1 + {\mathbb {E}}_{{\mathbb {P}}} \Big [ {\mathcal {V}}( \lambda {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]})\Big ] \lesssim _\lambda 1. \end{aligned}$$
(3.10)

By combining (3.9) and (3.10), we obtain that \( {\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }\sim _\lambda 1\).
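Spelled out, the two bounds combine as follows, where \( C_\lambda \) denotes the constant in (3.9) and \( C_\lambda' \) the implicit constant in (3.10):

```latex
% (3.9):  -\log \mathcal{Z}^{T,\lambda} \ge -C_\lambda
%         \iff \mathcal{Z}^{T,\lambda} \le e^{C_\lambda},
% (3.10): -\log \mathcal{Z}^{T,\lambda} \le C_\lambda'
%         \iff \mathcal{Z}^{T,\lambda} \ge e^{-C_\lambda'},
% so that
e^{-C_\lambda'} \le \mathcal{Z}^{\scriptscriptstyle{T},\lambda} \le e^{C_\lambda}
\qquad \text{uniformly in } T.
```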

We now turn to (ii), which controls the Laplace transform of \( {\widetilde{\mu }}_{T}\). Using the Boué-Dupuis formula and Proposition 3.1, we obtain that

$$\begin{aligned}&- \log \Big ( {\mathbb {E}}_{{\widetilde{\mu }}_T}\Big [ \exp \big ( - \varphi (W) \big ) \Big ] \Big ) \\&\quad = \log ({\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }) + \inf _{u \in {\mathbb {H}}_a} {\mathbb {E}}_{{\mathbb {P}}} \bigg [ \Psi ^{\scriptscriptstyle {T},\varphi }_\lambda (W, I[u]) + \frac{\lambda }{4} {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}(u)) + \frac{1}{2} \Vert l^{\scriptscriptstyle {T}}[u] \Vert _{{L^2_{t,x}}}^2 \bigg ]. \end{aligned}$$

The first summand \( \log ({\mathcal {Z}}^{\scriptscriptstyle {T},\lambda }) \) has already been controlled. The second summand can be controlled using exactly the same estimates.

We finally prove (iii). Let \( \alpha ,\eta >0 \) be sufficiently small depending on \( \kappa \). Since the embedding \( {\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^{-\frac{1+\kappa }{2}} \hookrightarrow {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-\frac{1}{2}-\kappa } \) is compact (see (1.18) for the definition), it suffices to estimate the Laplace transform evaluated at

$$\begin{aligned} \varphi (W) = - \Vert W \Vert _{ {\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^{-\frac{1+\kappa }{2}}}. \end{aligned}$$
(3.11)

While this is not a functional on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-\frac{1}{2}-\kappa } \), we can proceed using a minor modification of the previous estimates. Using Proposition 3.1 and (3.7), it suffices to prove

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {P}}} \big [ \Vert W \Vert _{ {\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^{-\frac{1+\kappa }{2}}} \big ] \lesssim 1 \quad \text {and} \quad \Vert I_t[u] \Vert _{ {\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^{-\frac{1+\kappa }{2}}} \lesssim \Vert u \Vert _{L_{t,x}^2}. \end{aligned}$$
(3.12)

The first estimate follows from Kolmogorov’s continuity theorem (cf. [40, Theorem 4.3.2]). The second estimate is deterministic and follows from Sobolev embedding and Lemma B.4. \(\square \)

Using Proposition 3.3, we easily obtain Theorem 1.3.

Proof of Theorem 1.3

The tightness is included in Proposition 3.3. The weak convergence of the sequence \((\mu _N)_{N\ge 1}\) follows from tightness and the uniqueness of weak subsequential limits (Proposition C.1). \(\square \)

We also record the following consequence of the proof of Proposition 3.1, which will play an important role in Sect. 5. The proof of this result will be postponed until Sect. 3.4.

Corollary 3.4

(Behavior of \(c^{\scriptscriptstyle {T},\lambda }\)). If \( \beta > 1/2 \), then we have for all \( \lambda >0 \) that

$$\begin{aligned} \sup _{T\ge 1} |c^{\scriptscriptstyle {T},\lambda }| \lesssim _\lambda 1. \end{aligned}$$
(3.13)

Proposition 3.1 is the most challenging part of the construction of the measure, and its proof will be distributed over the remainder of this section.

3.2 Visan’s estimate and the cubic terms

In the variational problem, the potential energy \( {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}[u]) \) appears with a favorable sign. This is crucial in order to control the terms which are cubic in \( I_\infty ^{\scriptscriptstyle {T}}[u]\) and hence cannot be controlled by the quadratic terms \( \Vert u\Vert _{L^2}^2 \) or \( \Vert l^{\scriptscriptstyle {T}}[u]\Vert _{L^2}^2 \). In the \( \Phi ^4_3\)-model, the potential energy term \( \Vert I_\infty ^{\scriptscriptstyle {T}}[u]\Vert _{L^4}^4 \) is both stronger and easier to handle. While we cannot change the strength of \( {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}[u]) \), Lemma 3.5 resolves the algebraic difficulties.

Due to the assumed lower bound on V, we first note that

$$\begin{aligned} \Vert f \Vert _{L^2_x({\mathbb {T}}^3)}^4 = \Vert f^2 \Vert _{L^1_x({\mathbb {T}}^3)}^2 \lesssim \int _{{\mathbb {T}}^3 \times {\mathbb {T}}^3} V(x-y) f(y)^2 f(x)^2 \,{\mathrm {d}}x\,{\mathrm {d}}y= {\mathcal {V}}(f). \end{aligned}$$

Since, at high frequencies, the kernel of \( \langle \nabla \rangle ^{-\beta } \) essentially behaves like \( |x-y|^{-(3-\beta )} \), we also obtain that

$$\begin{aligned} \Vert \langle \nabla \rangle ^{-\frac{\beta }{2}}[f^2] \Vert _{L^2({\mathbb {T}}^3)}^2= & {} \langle \big ( \langle \nabla \rangle ^{-\beta } f^2\big ) , f^2 \rangle _{L^2_x({\mathbb {T}}^3)} \nonumber \\\lesssim & {} \int _{{\mathbb {T}}^3 \times {\mathbb {T}}^3} V(x-y) f(y)^2 f(x)^2 \,{\mathrm {d}}x\,{\mathrm {d}}y= {\mathcal {V}}(f). \end{aligned}$$
(3.14)

Unfortunately, the square of f is inside the integral operator \( \langle \nabla \rangle ^{-\frac{\beta }{2}} \), which makes it difficult to use this estimate. The next lemma yields a much more useful lower bound on \( {\mathcal {V}}(f) \).

Lemma 3.5

(Visan’s estimate). Let \( 0< \beta < 3 \) and \( f\in C^\infty ({\mathbb {T}}^3) \). Then, it holds that

$$\begin{aligned} \Vert \langle \nabla \rangle ^{-\frac{\beta }{4}} f \Vert _{L^4_x({\mathbb {T}}^3)}^4 \lesssim {\mathcal {V}}(f). \end{aligned}$$
(3.15)

This estimate is a minor modification of [41, (5.17)] and we omit the details. We now turn to the primary application of Visan’s estimate in this work.

Lemma 3.6

(Cubic estimate). For any small \( \delta >0\) and any \( \frac{1+2\delta }{2} < \theta \le 1 \), it holds that

$$\begin{aligned} \Big \Vert \langle \nabla \rangle ^{\frac{1}{2}+\delta } \Big ( (V* f^2) f \Big ) \Big \Vert _{L^1_x({\mathbb {T}}^3)} \lesssim {\mathcal {V}}(f)^{\frac{1}{2}} \Vert f\Vert _{L^2_x({\mathbb {T}}^3)}^{1-\theta } \Vert f \Vert _{H^1_x({\mathbb {T}}^3)}^{\theta }. \end{aligned}$$
(3.16)

Proof

We use a Littlewood-Paley decomposition to write

$$\begin{aligned} (V*f^2) f = \sum _{M,N_3} P_M \big ( V*f^2\big ) \cdot P_{N_3} f. \end{aligned}$$

We first estimate the contribution for \( N_3 \gtrsim M \). We have that

$$\begin{aligned}&\sum _{\begin{array}{c} M,N_3:N_3 \gtrsim M \end{array}} \big \Vert \langle \nabla \rangle ^{\frac{1}{2}+\delta } \Big ( P_M \big ( V*f^2\big ) \cdot P_{N_3} f \Big ) \big \Vert _{L^1_x} \\&\quad \lesssim \sum _{\begin{array}{c} M,N_3 :N_3 \gtrsim M \end{array}} N_3^{\frac{1}{2}+\delta } \Vert P_M( V * f^2) \Vert _{L^2_x} \Vert P_{N_3} f \Vert _{L^2_x} \\&\quad \lesssim \Big ( \sum _{\begin{array}{c} M,N_3 :N_3 \gtrsim M \end{array}} N_3^{\frac{1}{2}+\delta } M^{-\frac{\beta }{2}} N_3^{-\theta } \Big ) \Vert \langle \nabla \rangle ^{-\frac{\beta }{2}} f^2 \Vert _{L^2_x} \Vert f\Vert _{L^2_x}^{1-\theta } \Vert f \Vert _{H^1_x}^{\theta } \\&\quad \lesssim \Vert \langle \nabla \rangle ^{-\frac{\beta }{2}} f^2 \Vert _{L^2_x} \Vert f\Vert _{L^2_x}^{1-\theta } \Vert f \Vert _{H^1_x}^{\theta } . \end{aligned}$$
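For completeness, the dyadic sum in the parenthesis converges: the sums in \( M \) and \( N_3 \) decouple, and the exponent of \( N_3 \) is negative because \( \theta > \frac{1+2\delta }{2} \).

```latex
\sum_{M, N_3 : N_3 \gtrsim M} N_3^{\frac{1}{2}+\delta-\theta} M^{-\frac{\beta}{2}}
\le \Big( \sum_{M \ge 1} M^{-\frac{\beta}{2}} \Big)
    \Big( \sum_{N_3 \ge 1} N_3^{\frac{1}{2}+\delta-\theta} \Big)
\lesssim 1,
```

where both sums run over dyadic integers and converge since \( \beta > 0 \) and \( \theta - \frac{1}{2} - \delta > 0 \).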

Due to (3.14), this contribution is acceptable. Next, we estimate the contribution of \( N_3 \lesssim M \). We further decompose

$$\begin{aligned} f^2 = \sum _{N_1,N_2} P_{N_1} f \cdot P_{N_2} f. \end{aligned}$$

Then, the total contribution can be bounded using Hölder’s inequality and Fourier support considerations by

$$\begin{aligned}&\sum _{\begin{array}{c} N_1,N_2,N_3,M:\\ N_3 \lesssim M \le \max (N_1,N_2) \end{array}} \Big \Vert \langle \nabla \rangle ^{\frac{1}{2}+\delta } \Big ( P_M\big ( V * (P_{N_1} f \cdot P_{N_2} f) \big ) \cdot P_{N_3} f\Big ) \Big \Vert _{L^1_x} \\&\quad \lesssim \sum _{\begin{array}{c} N_1,N_2,N_3,M:\\ N_3 \lesssim M \le \max (N_1,N_2) \end{array}}M^{\frac{1}{2}+\delta } \Vert P_M\big ( V * (P_{N_1} f\cdot P_{N_2} f ) \big ) \Vert _{L_x^{\frac{4}{3}}} \Vert P_{N_3} f \Vert _{L^4_x} \\&\quad \lesssim \sum _{\begin{array}{c} N_1,N_2,N_3,M:\\ N_3 \lesssim M \le \max (N_1,N_2) \end{array}}M^{\frac{1}{2}+\delta -\beta } N_3^{\frac{\beta }{4}} \Vert P_{N_1} f\cdot P_{N_2} f \Vert _{L_x^{\frac{4}{3}}} \Vert P_{N_3} \langle \nabla \rangle ^{-\frac{\beta }{4}} f \Vert _{L^4_x} \\&\quad \lesssim \Big ( \sum _{\begin{array}{c} N_1,N_2,M:\\ N_1 \ge M,N_2 \end{array}} M^{\frac{1}{2}+\delta -\frac{3\beta }{4}} N_1^{-\theta } N_2^{\frac{\beta }{4}} \Big ) \Vert \langle \nabla \rangle ^{-\frac{\beta }{4}} f \Vert _{L^4_x}^2 \Vert f \Vert _{L^2_x}^{1-\theta } \Vert f \Vert _{H^1_x}^{\theta } \\&\quad \lesssim \Vert \langle \nabla \rangle ^{-\frac{\beta }{4}} f \Vert _{L^4_x}^2 \Vert f \Vert _{L^2_x}^{1-\theta } \Vert f \Vert _{H^1_x}^{\theta }. \end{aligned}$$

In the last line, it is simplest to first perform the sum in \( N_2 \), then in \( N_1 \), and finally in M. \(\square \)

3.3 A random matrix estimate and the quadratic terms

In the proof of Proposition 3.1, we will encounter expressions such as

(3.17)

This term no longer involves an explicit stochastic object evaluated at a single point \( x \in {\mathbb {T}}^3 \). By expanding the convolution, we can capture stochastic cancellations in terms of two spatial variables \( x \in {\mathbb {T}}^3 \) and \( y\in {\mathbb {T}}^3\), which have already been studied in Lemma 2.19. The most natural way to capture the stochastic cancellations in (3.17), however, is through random operator bounds. This is the object of the next proposition.

Proposition 3.7

(Random matrix estimate). Let \( \gamma > \max (1-\beta ,1/2) \) and let \( 1 \le r \le \infty \). We define

Then, we have for all \( 1 \le p <\infty \) that

$$\begin{aligned} \sup _{T,t\ge 0} \Vert {\text {Op}}^T_t(\gamma ,r)\Vert _{L^p_\omega (\Omega )} \lesssim p. \end{aligned}$$
(3.18)

Remark 3.8

Aside from Fourier support considerations, the proof below mainly proceeds in physical space. If \( r=2 \), an alternative approach is to view \( {\text {Op}}^T_t(\gamma ,2)\) as the operator norm of a random matrix acting on the Fourier coefficients. Using a non-trivial amount of combinatorics, one can then bound \( {\text {Op}}^T_t(\gamma ,2)\) using the moment method (see also [14, Proposition 2.8]). This alternative approach is closer to the methods in the literature on random dispersive equations but more complicated. The estimate for \( r \ne 2 \), which is not needed in this paper, is useful in the study of the stochastic heat equation with Hartree nonlinearity.

Proof

Since this will be important in the proof, we now indicate the dependence of the multiplier on the interaction potential by writing \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[V] \). We use a Littlewood-Paley decomposition of the stochastic object and of \( f_2 \). We then have that

To control this sum, we first define a frequency-localized version of \( {\text {Op}}^T_t(\gamma ,r)\) by

We emphasize the change from \( {\mathbb {W}}_x^{\gamma ,r}({\mathbb {T}}^3) \) to \( L^r_x({\mathbb {T}}^3) \), which simplifies the notation below. By proving the estimate for a slightly smaller \( \gamma \), (3.18) reduces to

$$\begin{aligned} \sup _{T,t\ge 0} \Vert {\text {Op}}^T_t(r;K_1,K_2,N_1,N_2)\Vert _{L^p_\omega (\Omega )} \lesssim p (N_1 N_2)^{-\delta } (K_1 K_2)^\gamma . \end{aligned}$$
(3.19)

By using Lemmas 2.16 and  2.19, it suffices to prove for a small \( \delta >0 \) that

(3.20)

By interpolation, we can further reduce to \( r=1 \) or \( r=\infty \). Using the self-adjointness of the convolution with V and the multiplier \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[V;N_1,N_2] \), it suffices to take \( r=1 \). We now separate the cases \( N_1 \sim N_2 \) and \( N_1 \not \sim N_2 \).

Case 1: \( N_1 \not \sim N_2 \). This is the easier (but slightly tedious) case and it does not contain any probabilistic resonances. We note that \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[V;N_1,N_2]=0 \) and hence we only need to control the convolution term. From Fourier support considerations, we also see that this term vanishes unless \( \max (K_1,K_2) \gtrsim \max (N_1,N_2) \). While our conditions on \( f_1 \) and \( f_2 \) are not completely symmetric and we already used the self-adjointness to restrict to \( r=1 \), we only treat the case \( K_1 \gtrsim K_2 \). Since our proof only relies on Hölder’s inequality and Young’s inequality, the case \( K_1 \lesssim K_2 \) can be treated similarly. We now estimate

We now split the last sum into the cases \( L \ll N_2 \) and \( N_2 \lesssim L \lesssim K_1 \). If \( L \ll N_2 \), we only obtain a non-zero contribution when \( N_2 \sim K_2 \). Thus, the corresponding contribution is bounded by

In the last line, we also used \( N_1 \lesssim K_1 \) and \( \gamma > 1/2 \). If \( L \gtrsim N_2 \), we simply estimate

provided that \( \gamma > \max ( 1-\beta , 1/2 ) \). This completes the estimate in Case 1, i.e., \( N_1 \not \sim N_2 \).

Case 2: \( N_1 \sim N_2 \). This is the more difficult case. Guided by the uncertainty principle, we decompose the interaction potential by writing \( V = P_{\ll N_1} V + P_{\gtrsim N_1} V \). Using the linearity of the multiplier \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[V;N_1,N_2] \) in V, we decompose

We now split the proof into two subcases corresponding to the contributions of \( P_{\ll N_1} V \) and \( P_{\gtrsim N_1} V \).

Case 2.a: \( N_1 \sim N_2\), contribution of \( P_{\ll N_1} V \). Similarly to Case 1, we do not rely on any cancellation between the convolution term and its renormalization. As a result, we estimate both terms separately.

We first estimate the convolution term. Due to the convolution with \( P_{\ll N_1} V \), we only obtain a non-zero contribution if \( N_1 \sim K_1 \). Using \( N_1 \sim N_2 \) in the second inequality below, we obtain that

Second, we turn to the multiplier term. From the definition of \( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[P_{\ll N_1} V; N_1,N_2] \) (see Definition 2.13), we see that the corresponding symbol is supported on frequencies \( |n|\sim N_1 \). As a result, we only obtain a non-zero contribution if \( K_1 \sim K_2 \sim N_1 \). Using Lemma 2.15, Hölder’s inequality, Young’s inequality, and the trivial estimate \(\Vert {\mathfrak {C}}^T_t[N_1,N_2]\Vert _{L_x^\infty } \lesssim N_1\), we obtain

$$\begin{aligned}&\Big |\int _{{\mathbb {T}}^3} \big ( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[P_{\ll N_1} V;N_1,N_2] P_{K_1}f_1 \big )P_{K_2} f_2 \,{\mathrm {d}}x\Big |\\&\quad = 1\{K_1\sim K_2 \sim N_1\} \Big | \int _{{\mathbb {T}}^3} \Big ( ( {\mathfrak {C}}^T_t[N_1,N_2]P_{\ll N_1} V ) * P_{K_1} f_1 \Big ) P_{K_2} f_2 \,{\mathrm {d}}x\Big | \\&\quad \lesssim 1\{K_1\sim K_2 \sim N_1\} \Vert ( {\mathfrak {C}}^T_t[N_1,N_2]P_{\ll N_1} V ) * P_{K_1} f_1\Vert _{L_x^1} \Vert P_{K_2} f_2 \Vert _{L_x^\infty } \\&\quad \lesssim 1\{K_1 \sim K_2\sim N_1\} \Vert {\mathfrak {C}}^T_t[N_1,N_2]P_{\ll N_1} V\Vert _{L_x^1} \Vert f_1\Vert _{L_x^1} \Vert f_2 \Vert _{L_x^\infty } \\&\quad \lesssim 1\{K_1 \sim K_2 \sim N_1\} \Vert {\mathfrak {C}}^T_t[N_1,N_2]\Vert _{L_x^\infty } \Vert V \Vert _{L_x^1} \\&\quad \lesssim 1\{K_1 \sim K_2 \sim N_1\} N_1 \lesssim (N_1 N_2)^{-\delta } (K_1 K_2)^\gamma . \end{aligned}$$

This completes the estimate of the contribution from \( P_{\ll N_1} V \).

Case 2.b: \( N_1 \sim N_2 \), contribution of \( P_{\gtrsim N_1} V \). The estimate for this case relies on the cancellation between the convolution and multiplier terms, i.e., on the renormalization. One important ingredient is the estimate \( \Vert P_{\gtrsim N_1} V \Vert _{L_x^1} \lesssim N_1^{-\beta } \), which yields an important gain.

Using the translation operator \( \tau _y \), we rewrite the convolution term as

Using Lemma 2.15, we obtain that

$$\begin{aligned}&\int _{{\mathbb {T}}^3} \big ( {\mathcal {M}}^{\scriptscriptstyle {T}}_t[P_{\gtrsim N_1} V;N_1,N_2] P_{K_1}f_1 \big )P_{K_2} f_2 \,{\mathrm {d}}x\\&\quad = \int _{{\mathbb {T}}^3} \Big ( \big ( {\mathfrak {C}}^T_t[N_1,N_2]P_{\gtrsim N_1} V \big ) * P_{K_1} f_1\Big )(x) P_{K_2} f_2(x) \,{\mathrm {d}}x\\&\quad = \int _{{\mathbb {T}}^3} P_{\gtrsim N_1} V(y) \bigg [ \int _{{\mathbb {T}}^3} \big (\tau _y P_{K_1} f_1\, P_{K_2} f_2 \big )(x) {\mathfrak {C}}^T_t[N_1,N_2](y) \,{\mathrm {d}}x\bigg ] \,{\mathrm {d}}y. \end{aligned}$$

By recalling Definition 2.14 and combining both identities, we obtain that

Using that is supported on frequencies \( \lesssim N_1 \), we obtain that

This completes the estimate of the contribution from \( P_{\gtrsim N_1} V \) and hence the proof of the proposition. \(\square \)

3.4 Proof of Proposition 3.1 and Corollary 3.4

In this subsection, we reap the benefits of our previous work and prove the main results of this section.

Proof

In this proof, we treat \( Q_T= Q_T(W,\varphi ,\lambda ) \) like an implicit constant and omit the dependence on \( W,\varphi \), and \( \lambda \). In particular, its precise value may change from line to line.

From the quartic binomial formula (Lemma 2.11), it follows that

We have grouped the terms according to their importance and their degree in \( I_\infty ^{\scriptscriptstyle {T}}[u] \). The first line consists of the main terms, whereas the second and third line consist of less important terms of increasing degree in \( I_\infty ^{\scriptscriptstyle {T}}[u] \). We will split them further in (3.23)–(3.26) below and introduce notation for the individual terms.

Since the relevant stochastic object has regularity \( \min (- \frac{3}{2}+\beta , -\frac{1}{2})- \) and \( I_\infty ^{\scriptscriptstyle {T}}[u] \) has regularity 1, the term

is potentially unbounded as \( T \rightarrow \infty \). As in [5], we absorb it into the quadratic term \( \frac{1}{2} \Vert u\Vert _{L^2}^2 \). To this end, we want to remove the integral in \( I_\infty ^{\scriptscriptstyle {T}}[u] \) and obtain an expression in the drift u. From Itô’s formula, it holds that

The second term is a martingale (in the upper limit of integration) and therefore has expectation equal to zero. Together with the self-adjointness of \( J_t \), it follows that

where \( l^T[u] \) is as in (3.5). To simplify the notation, we write

(3.21)

With \( {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]} \) as in (2.46), it follows that

$$\begin{aligned} I_t^{\scriptscriptstyle {T}}[w] = I_t^{\scriptscriptstyle {T}}[u] + \lambda {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]}. \end{aligned}$$
(3.22)

By inserting this back into the quartic binomial formula, we obtain that

(3.23)

where the “error” terms \( {\mathcal {E}}_j \), with \( j=0,1,2,3\), are given by

(3.24)
(3.25)
(3.26)
(3.27)

Since \( {\mathcal {E}}_0\) does not depend on w, we can define

$$\begin{aligned} c^{\scriptscriptstyle {T},\lambda }\overset{\text {def}}{=}-{\mathbb {E}}_{{\mathbb {P}}} \big [ {\mathcal {E}}_0 \big ]. \end{aligned}$$
(3.28)

The behavior of \( c^{\scriptscriptstyle {T},\lambda }\) as \( T \rightarrow \infty \) is irrelevant for the rest of the proof. However, it determines whether the Gibbs measure is singular or absolutely continuous with respect to the Gaussian free field (see Sect. 5). From the estimates (B.3) and (B.4), it is easy to see that

$$\begin{aligned}&-\,Q_T + \frac{1}{2} \Big ( \frac{\lambda }{4} {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}[u]) + \frac{1}{2} \Vert w \Vert _{{L^2_{t,x}}}^2 \Big )\le \frac{\lambda }{4} {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}[w]) + \frac{1}{2} \Vert w \Vert _{{L^2_{t,x}}}^2 \\&\quad \le Q_T + 2 \Big ( \frac{\lambda }{4} {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}[u]) \\&\quad +\, \frac{1}{2} \Vert w \Vert _{{L^2_{t,x}}}^2 \Big ). \end{aligned}$$

Thus, it suffices to bound the terms in \( {\mathcal {E}}_1, {\mathcal {E}}_2, \) and \( {\mathcal {E}}_3 \) pointwise by

$$\begin{aligned} Q_T + \frac{1}{8} \Big ( \frac{\lambda }{4} {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}[w]) + \frac{1}{2} \Vert w \Vert _{{L^2_{t,x}}}^2 \Big ). \end{aligned}$$

We treat the individual summands separately.

Contribution of \({\mathcal {E}}_1\): For the first summand in \( {\mathcal {E}}_1 \), the linear growth of \( \varphi \), Sobolev embedding, a minor modification of (2.47), and Lemma B.4 imply that

(4.1)

For the second summand in \( {\mathcal {E}}_1\), we have from Lemma 2.20 that

For the third summand in \( {\mathcal {E}}_1 \), we have from Lemmas 2.20 and  B.4 that

Contribution of \( {\mathcal {E}}_2 \): For the first summand in \( {\mathcal {E}}_2\), the random matrix estimate (Proposition 3.7) implies for every \( 0< \gamma < \min (\beta ,\frac{1}{2}) \) that

The second summand in \( {\mathcal {E}}_2\) can easily be controlled using Lemma 2.16.

Contribution of \( {\mathcal {E}}_3 \): We estimate the first summand in \( {\mathcal {E}}_3 \) by

In the second factor, we bound the contribution of \( ( V* I_\infty ^{\scriptscriptstyle {T}}[w]^2) I_\infty ^{\scriptscriptstyle {T}}[w] \) using Lemma 3.6. In contrast, the terms containing at least one factor of \( {\mathbb {W}}_t^{\scriptscriptstyle {T},[3]} \) can be controlled using Lemma 2.20, (B.3) and (B.4). This leads to

The second summand in \( {\mathcal {E}}_3 \) can be controlled using the same (or simpler) arguments. \(\square \)

Based on the proof of Proposition 3.1, we can also determine the behavior as \( T \rightarrow \infty \) of the renormalization constants \( c^{\scriptscriptstyle {T},\lambda }\). In particular, we obtain a short proof of Corollary 3.4.

Proof of Corollary 3.4

Let \( \beta > 1/2 \) and choose any \( 1/2< \gamma < \min ( \beta ,1) \). Using the definition of \( c^{\scriptscriptstyle {T},\lambda }\) in (3.28), it remains to control the expectation of \( {\mathcal {E}}_0 \), which is defined in (3.24). We treat the four terms in \( {\mathcal {E}}_0 \) separately.

The first term has zero expectation by Proposition 2.9. For the second term, we obtain from Corollary 2.18 that

For the third term, we obtain from Lemmas 2.16 and  2.20 that

For the fourth term, we obtain from Lemma 2.20 and the random matrix estimate (Proposition 3.7) that

This completes the argument. \(\square \)

4 The reference and drift measures

In this section, we prove Theorem 1.4, which contains information regarding the reference measures. In this paper, we will use the reference measure \(\nu _\infty \) to prove the singularity of the Gibbs measure (Theorem 1.5). In the second part of this series, the reference measures will play an essential role in the probabilistic local well-posedness theory.

As in previous sections, we replace the truncation parameter N by T. Due to its central importance, let us provide an informal description of the terms in the representation of \(\nu _T\). The first summand follows the distribution of the Gaussian free field, which has independent Fourier coefficients and regularity \( -1/2-\). The second summand is a cubic Gaussian chaos with regularity \( {\min (1/2 + \beta ,1)-} \). Finally, the third summand is a Gaussian chaos of order n with regularity \( 5/2-\).

The statement of Theorem 1.4 is concerned with measures on \( {\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3) \). At this point, it should not be surprising to the reader that the proof mostly uses the lifted measures \( {\widetilde{\mu }}_{T}\) and \( {\widetilde{\mu }}_\infty \). We will construct a reference measure \( {\mathbb {Q}}^{u}_{T}\) for \( {\widetilde{\mu }}_{T}\), and the reference measure \( \nu _{T}\) will be given by the pushforward of \( {\mathbb {Q}}^{u}_{T}\) under \( W_\infty \). Since the main tool in the construction of \( {\mathbb {Q}}^{u}_{T}\) is Girsanov’s theorem, we call \( {\mathbb {Q}}^{u}_{T}\) the drift measure. This section is a modification of the arguments in Barashkov and Gubinelli’s paper [6]. Since \( l^{\scriptscriptstyle {T}}[u] \) in Proposition 3.1 is simpler than in the \( \Phi ^4_3\)-model, however, we obtain slightly stronger results. For instance, we prove \( L^q\)-bounds for the density \( D_{T}\) in (4.23), whereas the analogous density in [6] only satisfies “local” \( L^q\)-bounds.

4.1 Construction of the drift measure

We define the forcing term

(4.2)

where n is a large odd integer depending on \( \beta \). The first summand in (4.1) is the main term. The second summand in (4.1) yields necessary coercivity in the proof of Lemma 4.3 and Proposition 4.7, but can be safely ignored for most of the argument. We define the drift \( u^{\scriptscriptstyle {T}}\) through the integral equation

(4.3)

We also define the drift u, which does not contain any regularization in the interaction, by

$$\begin{aligned} u_t= & {} - \lambda J_t \Big ( :(V*(W_t - I_t[u])^2) (W_t - I_t[u]) :\Big ) \nonumber \\&\quad + J_t \langle \nabla \rangle ^{-\frac{1}{2}} :\Big ( \langle \nabla \rangle ^{-\frac{1}{2}} \big ( W_t - I_t[u]\big )\Big )^{n} :. \end{aligned}$$
(4.4)

Using the binomial formulas (Lemmas 2.11 and 2.23), we see that the integral equation has smooth coefficients on every compact subset of \( [0,\infty ) \times {\mathbb {T}}^3 \). As a result, it can be solved locally in time using standard ODE theory. Due to the polynomial nonlinearity, however, we will need to rule out finite-time blow-up. To this end, we introduce the blow-up time \( T_{\exp }[u^{\scriptscriptstyle {T}}] \in (0,\infty ] \), which we will later show to be infinite almost surely with respect to both \( {\mathbb {P}}\) and \( {\mathbb {Q}}^{u}_{T}\). The reason is that the highest-degree term in (4.2), which is given by \( - J_t^{\scriptscriptstyle {T}}\langle \nabla \rangle ^{-1/2} ( \langle \nabla \rangle ^{-1/2} I_t^{\scriptscriptstyle {T}}[u^{\scriptscriptstyle {T}}])^{n} \), is defocusing. We also introduce the stopping time

$$\begin{aligned} \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\overset{\text {def}}{=}\inf \Big \{ t \in [0,\infty ) :\int _0^t \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2_x}^2 \, {\mathrm {d}}s= N \Big \}. \end{aligned}$$
(4.5)

From the integral equation, it is clear that \( u^{\scriptscriptstyle {T}}_t(\cdot ) \) is supported in frequency space on the finite set \( \{ n\in {\mathbb {Z}}^3:\Vert n\Vert \lesssim \langle t \rangle \} \). As a result, the \( L_t^2 L_x^2\)-norm can be used as a blow-up criterion and the solution \( u^{\scriptscriptstyle {T}}_t\) exists for all times \( t \le \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\), i.e., \( T_{\exp }[u^{\scriptscriptstyle {T}}]>\tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\). We then define the truncated solution by

$$\begin{aligned} u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_t\overset{\text {def}}{=}1\{t\le \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\}\, u^{\scriptscriptstyle {T}}_t. \end{aligned}$$
(4.6)

From the definition of \( \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\), it follows that

$$\begin{aligned} \int _0^\infty \Vert u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s\Vert _{L^2_x}^2 \, {\mathrm {d}}s\le N. \end{aligned}$$

Thus, \( u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}\) satisfies Novikov’s condition and we can define the shifted probability measure \( {\mathbb {Q}}^{u}_{T,N}\) by

$$\begin{aligned} \frac{{\mathrm {d}}{\mathbb {Q}}^{u}_{T,N}~}{{\mathrm {d}}{\mathbb {P}}~ } = \exp \Big ( \int _0^\infty \int _{{\mathbb {T}}^3} u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s{\mathrm {d}}B_s - \frac{1}{2} \int _0^\infty \Vert u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ). \end{aligned}$$
(4.7)

Here, the \( L^2_x\)-pairing in the integral \( \int _0^\infty \int _{{\mathbb {T}}^3} u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s{\mathrm {d}}B_s \) is implicit, i.e.,

$$\begin{aligned} \int _0^\infty \int _{{\mathbb {T}}^3} u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s{\mathrm {d}}B_s = \int _0^\infty \langle u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s, {\mathrm {d}}B_s \rangle _{L^2_x({\mathbb {T}}^3)} = \sum _{\begin{array}{c} n_1,n_2\in {\mathbb {Z}}^3:\\ n_1+n_2=0 \end{array}} \int _0^\infty {\widehat{u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s}}(n_1) \, {\mathrm {d}}B_s^{n_2}. \end{aligned}$$

We emphasize that the stochastic integral \( \int _0^\infty \int _{{\mathbb {T}}^3} u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s{\mathrm {d}}B_s \) only depends on the Brownian process B through the Gaussian process W. This is important in order to view \( {\mathbb {Q}}^{u}_{T,N}\) as a measure on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \) without changing the expression for the density. To make this direct dependence on W clear, we note that \( u^{\scriptscriptstyle {T}}\) and hence \( \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\) are functions of \( W^{\scriptscriptstyle {T}}\), and hence of W, directly from their definition. By using the definition of \( u^{\scriptscriptstyle {T}}\) and the self-adjointness of \( J_t^{\scriptscriptstyle {T}}\), we obtain that

The expression on the right-hand side clearly is a function of \( W^{\scriptscriptstyle {T}}\) and hence W. With a slight abuse of notation, we will keep writing the integral with respect to \( {\mathrm {d}}B_s \), since it is more compact.

By Girsanov’s theorem, the process

(4.8)

is a cylindrical Brownian motion under \( {\mathbb {Q}}^{u}_{T,N}\). In particular, the law of the shifted process under \( {\mathbb {Q}}^{u}_{T,N}\) coincides with the law of \( B_t \) under \( {\mathbb {P}}\). As a consequence, the process

(4.9)

satisfies

(4.10)

To avoid confusion, let us remark on a technical detail. In the definition (4.8), the drift \( u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s\) is supported on frequencies \( |n|\lesssim \langle T \rangle \). The right-hand side of (4.8), however, does not contain a further frequency projection. In particular, W and hence the shifted process contain arbitrarily high frequencies. This is related to the definition of the truncated Gibbs measure \( \mu _T\), where the density only depends on frequencies \( \lesssim \langle T \rangle \), but whose samples contain arbitrarily high frequencies. Put differently, we regularize the interaction but not the samples themselves. While the shifted process contains all frequencies, we will often work with its frequency-truncated version, which only contains frequencies \( \lesssim \langle T \rangle \). Similarly to Sect. 2.1, we define the truncated process by

(4.11)

Due to the integral equation (4.2), we have that

(4.12)

We intend to use \( {\mathbb {Q}}^{u}_{T,N}\) (and its limit as \( N \rightarrow \infty \)) as a reference measure for \( {\widetilde{\mu }}_{T}\). Due to (4.9), the law of the shifted process under \( {\mathbb {Q}}^{u}_{T,N}\) does not depend on N. In our estimates of \( u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_t\) through the integral equation, it is therefore natural to view the shifted process as given. Under this perspective, the right-hand side of (4.11) no longer depends on \( u^{\scriptscriptstyle {T}}\) and yields an explicit expression for \( u^{\scriptscriptstyle {T}}\). For comparison, the corresponding equation in the \( \Phi ^4_3\)-model (cf. [6, (14)]) is a linear integral equation. We now start to estimate the drift \( u^{\scriptscriptstyle {T}}\).

Lemma 4.1

For all \( 1 \le M \le N \), all \( S \ge 0 \), and all \( 0< \gamma < \min (1,\beta ) \), it holds that

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}} \Big [ \int _0^{\tau _{\scriptscriptstyle {T},\scriptscriptstyle {M}}\wedge S} \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ] \lesssim \max ( S^{1-2\gamma },1). \end{aligned}$$
(4.13)

In particular, it holds that

$$\begin{aligned} {\mathbb {Q}}^{u}_{T,N}\big ( \tau _{\scriptscriptstyle {T},\scriptscriptstyle {M}}\le S \big ) \lesssim \frac{\max (S^{1-2\gamma },1)}{M}. \end{aligned}$$
(4.14)
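The deduction of (4.14) from (4.13) is a direct consequence of Chebyshev’s inequality. Indeed, provided that \( \tau _{\scriptscriptstyle {T},\scriptscriptstyle {M}}\) denotes (as in the analogous construction for the \( \Phi ^4_3\)-model) the first time at which the drift energy \( \int _0^t \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\) reaches M, this integral equals M on the event \( \{ \tau _{\scriptscriptstyle {T},\scriptscriptstyle {M}}\le S \} \), so that

$$\begin{aligned} {\mathbb {Q}}^{u}_{T,N}\big ( \tau _{\scriptscriptstyle {T},\scriptscriptstyle {M}}\le S \big ) \le \frac{1}{M} \, {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}} \Big [ \int _0^{\tau _{\scriptscriptstyle {T},\scriptscriptstyle {M}}\wedge S} \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ] \lesssim \frac{\max (S^{1-2\gamma },1)}{M}. \end{aligned}$$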

Proof

We recall from the definition of the drift measure that

As a result, we obtain that

For the first summand, we obtain from the definition of \( J_s^{\scriptscriptstyle {T}}\) and Lemma 2.16 that

For the second summand, we obtain from Lemma 2.23 that

This yields the desired estimate. \(\square \)

Lemma 4.2

For all \( 1 \le M \le N \), \( 1 \le p <\infty \), and \( \gamma < \min (1/2,\beta )\), it holds that

$$\begin{aligned} \sup _{T,t \ge 0} \Big ( {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}} \big [ \Vert I_t[u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}] \Vert _{{\mathcal {C}}^{\frac{1}{2}+\gamma }_x({\mathbb {T}}^3)}^p \big ] \Big )^{\frac{1}{p}} \lesssim _p 1 . \end{aligned}$$

Furthermore, we have for any \( 0<\alpha < 1 \) and \( 0< \eta < 1/2 \) that

$$\begin{aligned} \sup _{T\ge 0} \Big ( {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}}\big [ \Vert I[u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}]\Vert _{{\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^0 ([0,\infty ]\times {\mathbb {T}}^3)}^p \big ] \Big )^{\frac{1}{p}} \lesssim _p 1, \end{aligned}$$
(4.15)

where the \( {\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^0 \)-norm is as in (1.18).

The proof of Lemma 4.2 is easier than its counterpart [6, (16)] in the \( \Phi ^4_3\)-model, which requires a Gronwall argument. The second estimate (4.15) is needed for technical reasons related to tightness, and we encourage the reader to skip its proof on first reading.

Proof

The argument is similar to the proof of Lemma 4.1. From the definition of \( u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}\) and \( u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}\), we have that

$$\begin{aligned} u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}_s= 1 \{ s \le \tau _{\scriptscriptstyle {T},\scriptscriptstyle {M}}\} u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s. \end{aligned}$$
(4.16)

Thus, we obtain that

$$\begin{aligned} \Vert I_t[u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}] \Vert _{{\mathcal {C}}_x^{\frac{1}{2}+\gamma }} \le \int _0^{t \wedge \tau _{\scriptscriptstyle {T},\scriptscriptstyle {M}}} \Vert J_s u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s\Vert _{{\mathcal {C}}_x^{\frac{1}{2}+\gamma }} \, {\mathrm {d}}s\le \int _0^t \Vert J_s u^{\scriptscriptstyle {T},\scriptscriptstyle {N}}_s\Vert _{{\mathcal {C}}_x^{\frac{1}{2}+\gamma }} \, {\mathrm {d}}s. \end{aligned}$$
(4.17)

Using the integral equation (4.2) again, we obtain that

(4.18)

Using that

we obtain from Lemmas 2.20 and 2.23 that

This completes the proof of the first estimate. The second estimate (4.15) follows from a minor modification of the proof. To simplify the notation, we set

For any \( K \ge 1 \), we have from a similar argument as in (4.17) that

$$\begin{aligned} \sup _{ \begin{array}{c} 0 \le t^\prime \le t :\\ t,t^\prime \sim K \end{array}} \frac{\Vert I_t[u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}] - I_{t^\prime }[u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}]\Vert _{L_x^\infty }}{1\wedge |t-t^\prime |^\alpha }&\quad \lesssim \sup _{ \begin{array}{c} 0 \le t^\prime \le t :\\ t,t^\prime \sim K \end{array}} \frac{1}{1\wedge |t-t^\prime |^\alpha } \int _{t^\prime }^{t} A(s) \, {\mathrm {d}}s\\&\quad \lesssim \int _{s\sim K} A(s) \, {\mathrm {d}}s+ \Big ( \int _{s \sim K} A(s)^{\frac{1}{1-\alpha }} \, {\mathrm {d}}s\Big )^{1-\alpha }. \end{aligned}$$
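The second inequality in the previous display follows by distinguishing two cases: if \( |t-t^\prime | \ge 1 \), the prefactor \( (1\wedge |t-t^\prime |^\alpha )^{-1} \) equals one and the first term suffices, while if \( |t-t^\prime | < 1 \), Hölder’s inequality in time with exponents \( \tfrac{1}{1-\alpha } \) and \( \tfrac{1}{\alpha } \) yields

$$\begin{aligned} \int _{t^\prime }^{t} A(s) \, {\mathrm {d}}s\le |t-t^\prime |^{\alpha } \Big ( \int _{t^\prime }^{t} A(s)^{\frac{1}{1-\alpha }} \, {\mathrm {d}}s\Big )^{1-\alpha } \le |t-t^\prime |^{\alpha } \Big ( \int _{s \sim K} A(s)^{\frac{1}{1-\alpha }} \, {\mathrm {d}}s\Big )^{1-\alpha }. \end{aligned}$$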

Proceeding as in the first estimate, this implies that

$$\begin{aligned} \Big ( {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}} \Big [ \Big (\sup _{ \begin{array}{c} 0 \le t^\prime \le t :\\ t,t^\prime \sim K \end{array}} \frac{\Vert I_t[u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}] - I_{t^\prime }[u^{\scriptscriptstyle {T},\scriptscriptstyle {M}}]\Vert _{L_x^\infty }}{1\wedge |t-t^\prime |^\alpha } \Big )^p \Big ] \Big )^{\frac{1}{p}} \lesssim K^{-\frac{1}{2}-\gamma }. \end{aligned}$$

The desired estimate of the \( {\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^0\)-norm then follows by summing over dyadic scales and using a telescoping series if the times are not comparable. \(\square \)

In Lemmas 4.1 and 4.2, we controlled the process \( u^{\scriptscriptstyle {T}}\) with respect to the measures \( {\mathbb {Q}}^{u}_{T,N}\). Unfortunately, the proof of Proposition 4.4 below also requires the absence of finite-time blowup for \( u^{\scriptscriptstyle {T}}\) with respect to \( {\mathbb {P}}\). This is the subject of the next lemma.

Lemma 4.3

For any \( T \ge 1 \), it holds that \( T_{\exp }[u^{\scriptscriptstyle {T}}] = \infty ~ {\mathbb {P}}\)-almost surely.

The proof of the analogue for the \( \Phi ^4_3\)-model (cf. [6, Lemma 5]) extends to our situation with only minor modifications, which we omit. To ease the reader’s mind, let us briefly explain why the same argument applies here. In most of this section, the key term in the integral equation (4.2) is the first summand. It has the lowest regularity and is closely tied to the interactions in the Hamiltonian. The result of Lemma 4.3, however, is essentially a soft statement. If we fix a time \( S \ge 1 \) and only want to rule out \( T_{\exp }[u^{\scriptscriptstyle {T}}] \le S \), the low regularity is inessential and only leads to a loss in powers of S. The main term is then given by the (auxiliary) second summand, which is defocusing and exactly the same as in the \( \Phi ^4_3\)-model.

The next proposition eliminates the stopping time from our drift measures.

Proposition 4.4

The family of measures \( ({\mathbb {Q}}^{u}_{T,N})_{T,N\ge 0} \) is tight on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \). For any fixed \( T \ge 0 \), the sequence of measures \( ({\mathbb {Q}}^{u}_{T,N})_{N\ge 0} \) weakly converges to a measure \( {\mathbb {Q}}^{u}_{T}\) as \( N \rightarrow \infty \). For any \( S \ge 0 \), the limiting measure \( {\mathbb {Q}}^{u}_{T}\) satisfies

$$\begin{aligned} \frac{{\mathrm {d}}{\mathbb {Q}}^{u}_{T}|_{{\mathcal {F}}_S}}{{\mathrm {d}}{\mathbb {P}}|_{{\mathcal {F}}_S}} = \exp \Big ( \int _0^S \int _{{\mathbb {T}}^3}u^{\scriptscriptstyle {T}}_s{\mathrm {d}}B_s - \frac{1}{2} \int _0^S \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ). \end{aligned}$$
(4.19)

Our argument differs from the proof of [6, Lemma 7], which is the analog for the \( \Phi ^4_3 \)-model. The argument in [6] relies on Kolmogorov’s extension theorem, whereas we rely on tightness and Prokhorov’s theorem. This is important in the proof of Corollary 4.5 below, since the measures \( {\mathbb {Q}}^{u}_{T}\) are not (completely) consistent. We also believe that this clarifies the mode of convergence. Before we begin with the proof, we state the following corollary.

Corollary 4.5

The measures \( {\mathbb {Q}}^{u}_{T}\) converge weakly to a measure \( {\mathbb {Q}}^u_{\infty }\) on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \) as \( T \rightarrow \infty \). For any \(S \ge 0 \), it holds that

$$\begin{aligned} \frac{{\mathrm {d}}{\mathbb {Q}}^u_{\infty }|_{{\mathcal {F}}_S}}{{\mathrm {d}}{\mathbb {P}}|_{{\mathcal {F}}_S}} = \exp \Big ( \int _0^S \int _{{\mathbb {T}}^3} u_s {\mathrm {d}}B_s - \frac{1}{2} \int _0^S \Vert u_s \Vert _{L^2}^2 \, {\mathrm {d}}s\Big ), \end{aligned}$$
(4.20)

where \( u_s \) is as in (4.3).

Proof of Proposition 4.4

We first prove that the family of measures \( ({\mathbb {Q}}^{u}_{T,N})_{T,N\ge 0} \), viewed as measures for W, is tight on \( {\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,\infty ]\times {\mathbb {T}}^3) \). From (4.8), we have that

(4.21)

Since the law of under \( {\mathbb {Q}}^{u}_{T,N}\) agrees with the law of W under \( {\mathbb {P}}\), an application of Kolmogorov’s continuity theorem (cf. [40, Theorem 4.3.2]) yields for any \( p \ge 1 \), \( 0< \alpha < \frac{1}{2} \), and \( 0< \eta <\kappa /2 \) that

Together with (4.21) and Lemma 4.2, this implies

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}} \Big [ \Vert W \Vert _{{\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^{-(1+\kappa )/2}}^p\Big ] \lesssim _p 1 \end{aligned}$$

Since the embedding \( {\mathcal {C}}_t^{\alpha ,\eta } {\mathcal {C}}_x^{-(1+\kappa )/2} \hookrightarrow {\mathcal {C}}_t^{0} {\mathcal {C}}_x^{-1/2-\kappa } \) is compact, this implies the tightness of the family of measures \( ({\mathbb {Q}}^{u}_{T,N})_{T,N\ge 0} \).

By Prokhorov’s theorem, a subsequence of \( ({\mathbb {Q}}^{u}_{T,N})_N \) weakly converges to a measure \( {\mathbb {Q}}^{u}_{T}\). Once we have proved (4.19), this can be upgraded to weak convergence of the full sequence, since (4.19) uniquely identifies the limit. With a slight abuse of notation, we therefore ignore the distinction between a subsequence and the full sequence.

Let \( S \ge 0 \) and let \( f:{\mathcal {C}}_t^0 {\mathcal {C}}_x^{-1/2-\kappa }([0,S]\times {\mathbb {T}}^3) \rightarrow {\mathbb {R}}\) be continuous, bounded, and nonnegative. We write f(W) for \( f(W|_{[0,S]}) \). Using the weak convergence of \( {\mathbb {Q}}^{u}_{T,N}\) to \( {\mathbb {Q}}^{u}_{T}\), we have that

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}}[f(W)]= & {} \lim _{N\rightarrow \infty } {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}}[f(W)] = \lim _{N\rightarrow \infty } \Big ( {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}}[ 1\{\tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\ge S\} f(W)]\\&+\, {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}}[ 1\{\tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}< S\} f(W)] \Big ). \end{aligned}$$

Using Lemma 4.1, the second term is controlled by

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}}[ 1\{\tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}< S\} f(W)] \le \Vert f \Vert _{\infty } {\mathbb {Q}}^{u}_{T,N}( \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}< S) \lesssim \Vert f\Vert _\infty \frac{\max (S^{1-2\gamma },1)}{N}, \end{aligned}$$

which converges to zero as \( N \rightarrow \infty \). Together with the definition of \( {\mathbb {Q}}^{u}_{T,N}\) and the martingale property of the Girsanov density, this implies

$$\begin{aligned}&{\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}}[f(W)] \\&\quad = \lim _{N\rightarrow \infty } {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T,N}}[ 1\{\tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\ge S\} f(W)] \\&\quad = \lim _{N\rightarrow \infty } {\mathbb {E}}_{{\mathbb {P}}} \Big [ f(W) 1\{ \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\ge S\} \exp \Big ( \int _0^{\tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}} u^{\scriptscriptstyle {T}}_s{\mathrm {d}}B_s - \frac{1}{2} \int _0^{\tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}} \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ) \Big ] \\&\quad = \lim _{N\rightarrow \infty } {\mathbb {E}}_{{\mathbb {P}}} \Big [ f(W)1\{ \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\ge S\}\\&\qquad \times \exp \Big ( \int _0^{S} u^{\scriptscriptstyle {T}}_s{\mathrm {d}}B_s - \frac{1}{2} \int _0^{S} \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ) \Big ]. \end{aligned}$$

Using monotone convergence and Lemma 4.3, we obtain

$$\begin{aligned}&\lim _{N\rightarrow \infty } {\mathbb {E}}_{{\mathbb {P}}} \Big [ f(W)1\{ \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\ge S\} \exp \Big ( \int _0^{S} u^{\scriptscriptstyle {T}}_s{\mathrm {d}}B_s - \frac{1}{2} \int _0^{S} \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ) \Big ] \\&\quad = {\mathbb {E}}_{{\mathbb {P}}} \Big [ f(W)1\{ T_{\exp }[u^{\scriptscriptstyle {T}}]> S\} \exp \Big ( \int _0^{S} u^{\scriptscriptstyle {T}}_s{\mathrm {d}}B_s - \frac{1}{2} \int _0^{S} \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ) \Big ]\\&\quad = {\mathbb {E}}_{{\mathbb {P}}} \Big [ f(W) \exp \Big ( \int _0^{S} u^{\scriptscriptstyle {T}}_s{\mathrm {d}}B_s - \frac{1}{2} \int _0^{S} \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ) \Big ]. \end{aligned}$$
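The first equality uses the monotone convergence theorem: the stopping times \( \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\) are nondecreasing in N (assuming, as is standard in this framework, that they are defined through a threshold that increases with N), so the nonnegative integrands increase in N, and Lemma 4.3 identifies the limit of the indicators:

$$\begin{aligned} \lim _{N \rightarrow \infty } 1\{ \tau _{\scriptscriptstyle {T},\scriptscriptstyle {N}}\ge S \} = 1\{ T_{\exp }[u^{\scriptscriptstyle {T}}] > S \} = 1 \qquad {\mathbb {P}}\text {-almost surely.} \end{aligned}$$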

\(\square \)

Proof of Corollary 4.5

Due to Proposition 4.4, the family of measures \( ({\mathbb {Q}}^{u}_{T})_{T\ge 0} \) is tight. By Prokhorov’s theorem, it follows that a subsequence weakly converges to a measure \( {\mathbb {Q}}^u_{\infty }\). Once (4.20) is proven, it uniquely identifies the limit \( {\mathbb {Q}}^u_{\infty }\). With a slight abuse of notation, we therefore assume as before that the whole sequence \( {\mathbb {Q}}^{u}_{T}\) converges weakly to \( {\mathbb {Q}}^u_{\infty }\).

Since and \( I_t^{\scriptscriptstyle {T}}= I_t \) for all \( 0 \le t \le T/4\) (by our choice of \(\rho \)), it follows from the integral equation (4.2) that \( u^{\scriptscriptstyle {T}}_s= u_s \) for all \( 0 \le s \le T/4 \). Using (4.19), it follows for all \(S \le T/4 \) that

$$\begin{aligned} \frac{{\mathrm {d}}{\mathbb {Q}}^{u}_{T}|_{{\mathcal {F}}_S}}{{\mathrm {d}}{\mathbb {P}}|_{{\mathcal {F}}_S}} = \exp \Big ( \int _0^S \int _{{\mathbb {T}}^3}u_s {\mathrm {d}}B_s - \frac{1}{2} \int _0^S \Vert u_s \Vert _{L^2}^2 \, {\mathrm {d}}s\Big ). \end{aligned}$$
(4.22)

The corresponding identity (4.20) for \( {\mathbb {Q}}^u_{\infty }\) then follows by taking \( T \rightarrow \infty \). \(\square \)

Corollary 4.6

For any \( T \ge 1 \), \( S \ge 1 \), and any \( 0<\gamma < \min (\beta ,1/2) \), the measure \( {\mathbb {Q}}^{u}_{T}\) satisfies the two estimates

$$\begin{aligned} {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}} \Big [ \int _0^{S} \Vert u^{\scriptscriptstyle {T}}_s\Vert _{L^2}^2 \, {\mathrm {d}}s\Big ]&\lesssim \max ( S^{1-2\gamma },1), \\ \sup _{t\ge 0} \Big ( {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}} \big [ \Vert I_t[u^{\scriptscriptstyle {T}}] \Vert _{{\mathcal {C}}_x^{\frac{1}{2}+\gamma }}^p \big ] \Big )^{\frac{1}{p}}&\lesssim _p 1. \end{aligned}$$

The corollary follows directly from Lemmas 4.1 and 4.2 and Proposition 4.4.

4.2 Absolute continuity with respect to the drift measure

We recall the definition of the measure \( {\widetilde{\mu }}_{T}\) from (2.10), which states that

(4.23)

Using Proposition 4.4, we obtain that

(4.24)

Since \( {\mathrm {d}}B_t = {\mathrm {d}}B_t^{u^{\scriptscriptstyle {T}}} + u^{\scriptscriptstyle {T}}_t{\mathrm {d}}t \), we also obtain that

(4.25)

Proposition 4.7

(\(L^q\)-bounds). If \( n\in {\mathbb {N}} \) in the definition of \( u^{\scriptscriptstyle {T}}\) is odd and sufficiently large, there exists a \( q > 1 \) such that

$$\begin{aligned} \sup _{T\ge 0} {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}} \Big [ |D_T|^q \Big ] \lesssim _{n,q} 1. \end{aligned}$$
(4.26)

Remark 4.8

We point out two important differences between Proposition 4.7 and the corresponding result for the \( \Phi ^4_3\)-model in [6, Lemma 9]. The first difference is a consequence of working with \( {\widetilde{\mu }}_{T}\) instead of \( \bar{\mu }_T \) as described in Sect. 2.1. Barashkov and Gubinelli define and bound the density \( D_T \) with respect to the same measure \( {\mathbb {Q}}^u_{\infty }\) for all \( T \ge 1 \). In contrast, our density is defined with respect to \( {\mathbb {Q}}^{u}_{T}\) and we make no statements about the behavior of \( D_T \) with respect to \( {\mathbb {Q}}^u_S \) for any \( S \ne T \). Since the increments of \( T \mapsto \rho _T(\nabla ) W_\infty \) are not independent, such a statement would be especially difficult if S and T are close. The second difference is a result of the smoothing effect of the interaction potential V. While the Hartree nonlinearity allows us to prove the full \( L^q \)-bound (4.26), the corresponding result in the \( \Phi ^4_3\)-model requires the localizing factor \( \exp ( - \Vert W_\infty \Vert _{{\mathcal {C}}_x^{-1/2-\epsilon }}^n)\).

The rest of this subsection is dedicated to the proof of the \( L^q\)-bounds (Proposition 4.7). Since we intend to apply the Boué-Dupuis formula to bound the density \( D_T \) in \( L^q({\mathbb {Q}}^{u}_{T}) \), we first study the effect of shifts in \( B^{u^{\scriptscriptstyle {T}}} \) on the integral equation (4.2). For any \( w \in {\mathbb {H}}_a\), we define

Using the cubic binomial formula (Lemma 2.11), we obtain that

(4.27)

where the remainder \( r_s^{\scriptscriptstyle {T},w}\) is given by

We also define \( h^{\scriptscriptstyle {T},w}= w + u^{\scriptscriptstyle {T},w}\). We further decompose

Before we begin the main argument, we prove the following auxiliary lemma.

Lemma 4.9

(Estimate of \({\widetilde{r}}_t^{\scriptscriptstyle {T},w}\)). Let \( \epsilon ,\delta >0 \) be small absolute constants and let \( n\ge n(\delta ,\beta ) \) be sufficiently large. Then, we have for all \( t \ge 0 \) that

(4.28)

Remark 4.10

We emphasize that the implicit constant does not depend on \( \epsilon \). In the application of Lemma 4.9, we will choose \( \epsilon >0 \) sufficiently small depending on \( \delta ,n,\beta ,\lambda \).

Proof

In the following argument, the implicit constants are allowed to depend on \( n,\delta ,\beta , \) and \( \lambda \) but not on \( \epsilon \). We estimate the five terms in \( {\widetilde{r}}_t^{\scriptscriptstyle {T},w} \) separately and do not require any new ingredients. We only rely on Lemma 2.16, Proposition 3.7, Hölder’s inequality, and Bernstein’s inequality.

For the first term, we have from the definition of \( J_t^{\scriptscriptstyle {T}}\) and Lemma 2.16 that

For the second term, we have from duality and Proposition 3.7 for all \( 0< \gamma < \min (\beta ,1) \) that

For the third term, we estimate

In the last line, we use [6, Lemma 20].

The fourth term can be estimated exactly like the third term. To estimate the fifth term, we only rely on Hölder’s inequality, Bernstein’s inequality, and the Fourier support condition of \( I_t^{\scriptscriptstyle {T}}[w] \). We have that

$$\begin{aligned}&\Vert J_t^{\scriptscriptstyle {T}}\Big ( (V* I_t^{\scriptscriptstyle {T}}[w] ^2) I_t^{\scriptscriptstyle {T}}[w] \Big ) \Vert _{L_x^2}^2 \lesssim \langle t \rangle ^{-3} \Vert (V* I_t^{\scriptscriptstyle {T}}[w] ^2) I_t^{\scriptscriptstyle {T}}[w] \Vert _{L_x^2}^2 \lesssim \langle t \rangle ^{-3} \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{L_x^6}^6\\&\quad \lesssim \langle t \rangle ^{-3+\frac{\delta }{2}} \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{L_x^{\frac{36}{6+\delta }}}^6 \\&\quad \lesssim \langle t\rangle ^{-3+\frac{\delta }{2}} \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{{\mathbb {W}}_x^{-\frac{1}{2},\frac{4}{\delta }}}^{4} \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{H_x^1}^2 \lesssim \langle t \rangle ^{-3+2\delta } \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{{\mathbb {W}}_x^{-\frac{1}{2},\frac{4}{\delta }}}^{4+\delta } \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{H_x^1}^{2-\delta } \\&\quad \lesssim C_\epsilon \langle t \rangle ^{-3+2\delta } + \epsilon \langle t \rangle ^{-3+2\delta } \Big ( \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{{\mathbb {W}}_x^{-\frac{1}{2},n+1}}^{n+1} + \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{H_x^1}^{2}\Big ). \end{aligned}$$

In the second-to-last inequality, we used that \( \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{H^1_x} \lesssim \langle t \rangle ^{\frac{3}{2}} \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{H_x^{-1/2}} \). This completes the estimate of all five terms in \( {\widetilde{r}}^{\scriptscriptstyle {T},w}_t \) and hence the proof. \(\square \)

Equipped with Lemma 4.9, we can now prove the \(L^q\)-bound for \(D_T\).

Proof of Proposition 4.7

The proof splits into two steps.

Step 1: Formulation as a variational problem. In order to prove the desired estimate (4.26), it suffices to obtain a lower bound on \( -\log {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}} [ D_T^q ] \). Using the Boué-Dupuis formula, we obtain

Since \( T\mapsto \int _0^T \int _{{\mathbb {T}}^3} u_t^{\scriptscriptstyle {T},w}{\mathrm {d}}B^{u^{\scriptscriptstyle {T}}}_t \) is a martingale, its expectation vanishes. We now insert the change of variables \( u^{\scriptscriptstyle {T},w}= h^{\scriptscriptstyle {T},w}-w \) into the formula above, and obtain that

Since we want to obtain a lower bound, the most dangerous term in the expression above is \( -\frac{q-1}{2} \int _0^\infty \Vert w_t\Vert _{L^2}^2 \, {\mathrm {d}}t\). Using our previous information about the variational problem (Propositions 3.1 and 3.3) and the nonnegativity of \( {\mathcal {V}}(I_\infty ^{\scriptscriptstyle {T}}[h^{\scriptscriptstyle {T},w}]) \), we obtain that

$$\begin{aligned}&-\log {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}}[ D_T^q] \ge -C + \inf _{w\in {\mathbb {H}}_a} {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}} \nonumber \\&\qquad \times \bigg [ \frac{1}{4} \int _0^\infty \Vert l^T_t(h^{\scriptscriptstyle {T},w})\Vert _{L^2}^2 \, {\mathrm {d}}t- \frac{q-1}{2} \int _0^\infty \Vert w_t\Vert _{L^2}^2 \, {\mathrm {d}}t\bigg ]. \end{aligned}$$
(4.29)

Recalling the definition of \( l_t^T(h^{\scriptscriptstyle {T},w}) \) from Proposition 3.1 and (4.27), we obtain that

Together with our previous estimate, this leads to

$$\begin{aligned}&-\log {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}}[ D_T^q] \ge -C + \inf _{w\in {\mathbb {H}}_a} {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}} \\&\qquad \times \bigg [ \frac{1}{4} \int _0^\infty \Vert w_t + r^{\scriptscriptstyle {T},w}_t\Vert _{L^2}^2 \, {\mathrm {d}}t- \frac{q-1}{2} \int _0^\infty \Vert w_t \Vert _{L^2}^2 \, {\mathrm {d}}t\bigg ]. \end{aligned}$$

After choosing \( q \) sufficiently close to one, it remains only to establish

$$\begin{aligned} {\mathbb {E}}\int _0^\infty \Vert w_t \Vert _{L^2}^2 \, {\mathrm {d}}t\lesssim 1+ {\mathbb {E}}\int _0^\infty \Vert w_t + r^{\scriptscriptstyle {T},w}_t \Vert _{L^2}^2 \, {\mathrm {d}}t. \end{aligned}$$
(4.30)

This bound is proven via a Gronwall-type argument.
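To make the reduction explicit, suppose that (4.30) holds with an implicit constant \( C_0 \ge 1 \) (the name \( C_0 \) is introduced only for this computation), i.e., \( {\mathbb {E}}\int _0^\infty \Vert w_t \Vert _{L^2}^2 \, {\mathrm {d}}t\le C_0 \big ( 1 + {\mathbb {E}}\int _0^\infty \Vert w_t + r^{\scriptscriptstyle {T},w}_t \Vert _{L^2}^2 \, {\mathrm {d}}t\big ) \). For \( q - 1 \le (2 C_0)^{-1} \), every \( w \in {\mathbb {H}}_a\) then satisfies

$$\begin{aligned} {\mathbb {E}}\bigg [ \frac{1}{4} \int _0^\infty \Vert w_t + r^{\scriptscriptstyle {T},w}_t\Vert _{L^2}^2 \, {\mathrm {d}}t- \frac{q-1}{2} \int _0^\infty \Vert w_t \Vert _{L^2}^2 \, {\mathrm {d}}t\bigg ] \ge \Big ( \frac{1}{4} - \frac{(q-1) C_0}{2} \Big ) \, {\mathbb {E}}\int _0^\infty \Vert w_t + r^{\scriptscriptstyle {T},w}_t \Vert _{L^2}^2 \, {\mathrm {d}}t- \frac{(q-1) C_0}{2} \ge - \frac{(q-1) C_0}{2}, \end{aligned}$$

so that \( -\log {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}}[ D_T^q] \ge -C - \tfrac{(q-1)C_0}{2} \), which yields (4.26).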

Step 2: Gronwall-type argument. This step crucially relies on the smoother term in the definition of the drift (4.2). We essentially follow the proof of [6, Lemma 11]. As in [6], we introduce the auxiliary process

(4.31)

With this notation, it holds that . We then expand

(4.32)

Using Itô’s integration by parts formula, we have for all \( s \le t \) that

Due to the martingale property, the second summand has zero expectation. After setting

(4.33)

we obtain from (4.32) that

(4.34)

We perform the Gronwall-type argument based on the quantity \( \Phi (t) \), which is defined by

$$\begin{aligned} \Phi (t)\overset{\text {def}}{=}{\mathbb {E}}\bigg [ \int _0^t \Vert w_s \Vert _{L^2}^2 \, {\mathrm {d}}s+ \Vert I_t^{\scriptscriptstyle {T}}[w] \Vert _{{\mathbb {W}}^{-\frac{1}{2},n+1}_x}^{n+1} \bigg ]. \end{aligned}$$
(5.1)

By [6, Lemma 12] and (4.34), we have that

From Lemma 4.9, we obtain for \( \epsilon , \delta >0 \) that

$$\begin{aligned} \Phi (t)&\lesssim _\delta 1+ {\mathbb {E}}\bigg [\int _0^t \Vert r_s^{\scriptscriptstyle {T},w} + w_s \Vert _{L^2}^2 \, {\mathrm {d}}s+ C_{\epsilon } \int _0^t \langle s \rangle ^{-1-\delta } Q_s({\mathbb {W}},\lambda ) \, {\mathrm {d}}s\bigg ]\\&\quad + \epsilon \int _0^t \langle s \rangle ^{-1-\delta } \Phi (s) \, {\mathrm {d}}s\\&\lesssim _\delta C_{\epsilon } + {\mathbb {E}}\bigg [\int _0^t \Vert r_s^{\scriptscriptstyle {T},w} + w_s \Vert _{L^2}^2 \, {\mathrm {d}}s\bigg ] + \epsilon \sup _{0\le s\le t} \Phi (s). \end{aligned}$$

By choosing \( \epsilon >0 \) sufficiently small depending on \( \delta \), this implies the desired estimate. \(\square \)

4.3 The reference measure

Using our construction of the drift measures \( {\mathbb {Q}}^{u}_{T}\), we now provide a short proof of Theorem 1.4. As in the rest of this section, we use the truncation parameter T.

Proof of Theorem 1.4

For any \(1\le T\le \infty \), we define the reference measure \(\nu _T\) as

$$\begin{aligned} \nu _T \overset{\text {def}}{=}(W_\infty )_\# {\mathbb {Q}}^{u}_{T}. \end{aligned}$$

By using the \(L^q\)-bound (Proposition 4.7), we have for all Borel sets \(A\subseteq {\mathcal {C}}_x^{-1/2-\kappa }({\mathbb {T}}^3)\) that

$$\begin{aligned} \mu _T(A)= & {} {\widetilde{\mu }}_T(W_\infty \in A) = {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}} \big [ 1\big \{ W_\infty \in A \big \} \, D_{T}\big ] \\\le & {} \Big ( {\mathbb {E}}_{{\mathbb {Q}}^{u}_{T}} \big [ D_{T}^q \big ] \Big )^{\frac{1}{q}} {\mathbb {Q}}^{u}_{T}(W_\infty \in A)^{1-\frac{1}{q}} \\\lesssim & {} \nu _T(A)^{1-\frac{1}{q}}. \end{aligned}$$

This proves the first part of Theorem 1.4. Regarding the representation of \(\nu _T\), which forms the second part of Theorem 1.4, we have that

$$\begin{aligned} \nu _T&= {{\,\mathrm{Law}\,}}_{{\mathbb {Q}}^{u}_{T}}(W_\infty ) \\&= {{\,\mathrm{Law}\,}}_{{\mathbb {Q}}^{u}_{T}}( W^u_\infty + I_\infty [u^{\scriptscriptstyle {T}}]) \\&= {{\,\mathrm{Law}\,}}_{{\mathbb {Q}}^{u}_{T}} \Big ( W_\infty ^u - \lambda \rho _T(\nabla ) \int _0^\infty J_s^2 :( V* (W_s^{\scriptscriptstyle {T},u})^2 ) W_s^{\scriptscriptstyle {T},u} : \, {\mathrm {d}}s\\&\quad + \rho _T(\nabla ) \int _0^\infty \langle \nabla \rangle ^{-\frac{1}{2}} J_s^2 :\big ( \langle \nabla \rangle ^{-\frac{1}{2}} W_s^{\scriptscriptstyle {T},u} \big )^n : \, {\mathrm {d}}s\Big ) \\&= {{\,\mathrm{Law}\,}}_{{\mathbb {P}}} \Big ( W_\infty - \lambda \rho _T(\nabla ) \int _0^\infty J_s^2 :( V* (W_s^{\scriptscriptstyle {T}})^2 ) W_s^{\scriptscriptstyle {T}} :\, {\mathrm {d}}s\\&\quad + \rho _T(\nabla ) \int _0^\infty \langle \nabla \rangle ^{-\frac{1}{2}} J_s^2 :\big ( \langle \nabla \rangle ^{-\frac{1}{2}} W_s^{\scriptscriptstyle {T}} \big )^n :\, {\mathrm {d}}s\Big ). \end{aligned}$$

This completes the proof. \(\square \)

5 Singularity

In this section, we prove Theorem 1.5. The majority of this section deals with the singularity for \( 0< \beta < 1/2 \). The absolute continuity for \( \beta > 1/2 \) will be deduced from Corollary 3.4 and requires no new ingredients. Theorem 1.5 is important for the motivation of this series of papers, since we provide the first proof of invariance for a Gibbs measure which is singular with respect to the corresponding Gaussian free field. The methods of this section, however, will not be used in the rest of this two-paper series.

We prove the singularity of the Gibbs measure \( \mu _\infty \) and the Gaussian free field \( {\mathscr {g}} \) through the explicit event in Proposition 5.1.

Proposition 5.1

(Singularity). Let \( 0< \beta < \frac{1}{2} \) and let \( \delta >0 \) be sufficiently small. Then, there exists a (deterministic) sequence \( (S_m)_{m=1}^\infty \subseteq {\mathbb {R}}_{>0} \) converging to infinity such that

$$\begin{aligned} \lim _{m\rightarrow \infty } \frac{1}{S_m^{1-2\beta -\delta }} \int _{{\mathbb {T}}^3} :(V* (\rho _{S_m}(\nabla ) \phi )^2) (\rho _{S_m}(\nabla ) \phi )^2 :\,{\mathrm {d}}x= 0 \quad {\mathscr {g}} \text {-a.s.} \end{aligned}$$
(5.2)

and

$$\begin{aligned} \lim _{m\rightarrow \infty } \frac{1}{S_m^{1-2\beta -\delta }} \int _{{\mathbb {T}}^3} :(V* (\rho _{S_m}(\nabla ) \phi )^2) (\rho _{S_m}(\nabla ) \phi )^2 :\,{\mathrm {d}}x= -\infty \quad \mu _\infty \text {-a.s.} \end{aligned}$$
(5.3)

Here, \( {\mathscr {g}} \) is the Gaussian free field, \( \mu _\infty \) is the Gibbs measure, and \( \phi \in {\mathcal {C}}_x^{-\frac{1}{2}-\kappa }({\mathbb {T}}^3) \) denotes the random element.

Remark 5.2

In the statement of the proposition, the reader may wish to replace \( \phi \) by \( W_\infty \), \( {\mathscr {g}} \) by \( {\mathbb {P}}\), and \( \mu _\infty \) by \( {\widetilde{\mu }}_\infty \). We choose the notation \( \phi \) to emphasize that this is a property of \( {\mathscr {g}} \) and \( \mu _\infty \) only and does not rely on the stochastic control perspective. Of course, the stochastic control perspective is heavily used in the proof.

To simplify the notation, we define

(5.4)

We note that the dependence on the interaction potential V is not reflected in this notation. We first study the behavior of the integral of with respect to \( {\mathbb {P}}\). This is the easier part of the proof, and the statement (5.2) follows from the following lemma.

Lemma 5.3

(Quartic power under the Gaussian free field). Let \( 0< \beta < 1/2 \). Then, we have that

(5.5)

Proof

From Proposition 2.9, we obtain that

Since the iterated stochastic integrals are uncorrelated, we obtain that

It now only remains to estimate the sum. Provided that \( \beta < 1/2 \), we first sum in \( n_3 \), then \( n_2 \), and finally \( n_1 \) to obtain

$$\begin{aligned} \sum _{n_1,n_2,n_3 \in {\mathbb {Z}}^3 } \langle n_{123} \rangle ^{-2} \langle n_{12} \rangle ^{-2\beta } \prod _{j=1}^3 \frac{\rho ^{\scriptscriptstyle {S}}_{s}(n_j)^2}{\langle n_j \rangle ^2}&\lesssim \sum _{n_1,n_2\in {\mathbb {Z}}^3 } \langle n_{12} \rangle ^{-1-2\beta } \prod _{j=1}^2 \frac{\rho ^{\scriptscriptstyle {S}}_{s}(n_j)^2}{\langle n_j \rangle ^2}\\&\lesssim \sum _{n_1 \in {\mathbb {Z}}^3} \frac{\rho ^{\scriptscriptstyle {S}}_{s}(n_1)^2}{\langle n_1 \rangle ^{2+2\beta }} \lesssim S^{1-2\beta }. \end{aligned}$$
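The first two inequalities in the previous display are instances of the standard discrete convolution estimate: for exponents \( 0< a,b < 3 \) with \( a + b > 3 \),

$$\begin{aligned} \sum _{m \in {\mathbb {Z}}^3} \langle n - m \rangle ^{-a} \langle m \rangle ^{-b} \lesssim _{a,b} \langle n \rangle ^{3-a-b}. \end{aligned}$$

The sum in \( n_3 \) corresponds to \( (a,b)=(2,2) \) and the sum in \( n_2 \) to \( (a,b)=(1+2\beta ,2) \); the assumption \( \beta > 0 \) guarantees \( a+b>3 \) in the second application.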

\(\square \)

We now begin our study of the integral under \( {\mathbb {Q}}^u_{\infty }\). Naturally, we would like to replace (most) occurrences of by , since the law of under \( {\mathbb {Q}}^u_{\infty }\) is explicit. This is the objective of our first (algebraic) lemma.

Lemma 5.4

For any \( S \ge 1 \), it holds that

(5.6)
(5.7)
(5.8)

where

(5.9)
(5.10)
(5.11)

Proof

Using (2.27) from Proposition 2.9 together with the integral equation for u, i.e. (4.3), we obtain that

(5.12)

From the cubic binomial formula (2.31) and the definition of , it follows that

Inserting this into (5.11) leads to the desired identity. \(\square \)

We begin by studying the right-hand side of (5.6), which is the main term. Our first lemma controls the expectation, which will be upgraded to a pointwise estimate later.

Lemma 5.5

If \( 0< \beta < 1/2 \) and \( S \ge 1 \) is sufficiently large, then

(5.13)

Proof

Since the law of under \( {\mathbb {Q}}^u_{\infty }\) coincides with the law of W under \( {\mathbb {P}}\), it holds that

(5.14)

The rest of the proof consists of a tedious but direct calculation. Using the real-valuedness of W and the stochastic integral representation (2.25), we have that

Taking expectations, we only obtain a non-trivial contribution for \( (n_1,n_2,n_3)=(m_1,m_2,m_3) \), and it follows that

By recalling that \( \sigma ^{\scriptscriptstyle {S}}_{s} = \rho _S \cdot \sigma _s \), integrating in s, using Lemma B.1, and symmetry considerations, we obtain that

where \( c,C>0 \) are small and large constants depending only on V. The only difference between the two terms lies in the power of \( \langle n_{12} \rangle \). The minor term can easily be estimated from above by

$$\begin{aligned}&\sum _{n_1,n_2,n_3\in {\mathbb {Z}}^3} \frac{\rho _S(n_{123}) }{\langle n_{123} \rangle ^2} \frac{1}{\langle n_{12} \rangle ^{1+2\beta }} \Big (\prod _{j=1}^3 \frac{\rho _S(n_j)}{\langle n_j \rangle ^2} \Big ) \int _0^\infty \sigma _s(n_{123})^2 \Big ( \prod _{j=1}^3 \rho _s(n_j)^2\Big ) \, {\mathrm {d}}s\\&\quad \lesssim \sum _{n_1,n_2,n_3\in {\mathbb {Z}}^3} \frac{1}{\langle n_{123} \rangle ^2 \langle n_{12} \rangle ^{1+2\beta } \langle n_1 \rangle ^2 \langle n_2 \rangle ^2 \langle n_3 \rangle ^2} \\&\quad \lesssim 1. \end{aligned}$$

Using Lemma B.5, the main term can be estimated from below by

$$\begin{aligned}&\sum _{n_1,n_2,n_3\in {\mathbb {Z}}^3} \frac{\rho _S(n_{123}) }{\langle n_{123} \rangle ^2} \frac{1}{\langle n_{12} \rangle ^{2\beta }} \Big (\prod _{j=1}^3 \frac{\rho _S(n_j)}{\langle n_j \rangle ^2} \Big ) \int _0^\infty \sigma _s(n_{123})^2 \Big ( \prod _{j=1}^3 \rho _s(n_j)^2\Big ) \, {\mathrm {d}}s\\&\quad \gtrsim \sum _{ \begin{array}{c} n_1,n_2,n_3\in {\mathbb {Z}}^3:\\ |n_j - S e_j |\le S/20 \end{array} } \frac{1}{\langle n_{123} \rangle ^2 \langle n_{12} \rangle ^{2\beta } \langle n_1 \rangle ^2 \langle n_2 \rangle ^2 \langle n_3 \rangle ^2} \\&\quad \gtrsim S^{-8-2\beta } \# \{ (n_1,n_2,n_3) \in ({\mathbb {Z}}^3)^3 :|n_j - S e_j |\le S/20 \text { for } j=1,2,3 \} \\&\quad \gtrsim S^{1-2\beta }. \end{aligned}$$
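In the final step, we used that each \( n_j \) ranges over the lattice points of a ball of radius S/20, so the number of admissible triples is comparable to \( S^9 \):

$$\begin{aligned} \# \{ (n_1,n_2,n_3) \in ({\mathbb {Z}}^3)^3 :|n_j - S e_j |\le S/20 \text { for } j=1,2,3 \} \gtrsim S^9, \end{aligned}$$

which yields \( S^{-8-2\beta } \cdot S^9 = S^{1-2\beta } \).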

This completes the proof of the lemma. \(\square \)

Before we can upgrade Lemma 5.5, we need the following estimate of the .

Lemma 5.6

Let \( 0< \beta < 1/2 \), let \( \delta >0 \) be sufficiently small, and let \( k \ge 1 \) be sufficiently large depending on \( \beta \). For any \( v :{\mathbb {R}}_{>0} \times {\mathbb {T}}^3 \rightarrow {\mathbb {R}}\) and any \( j=1,2,3\), it then holds that

(5.15)

Remark 5.7

As is clear from the proof, this estimate can be slightly refined. Ignoring \( \delta \)-losses, the worst power \( \langle s \rangle ^{-1-2\beta } \) only occurs with \( \Vert I^{\scriptscriptstyle {S}}_s[v]\Vert _{H_x^{1-\beta }}^2 \) instead of \( \Vert I^{\scriptscriptstyle {S}}_s[v]\Vert _{H_x^{1}}^2 \). However, (5.15) is sufficient for our purposes.

Proof

We treat the estimates for \( j=1,2,3 \) separately. We first estimate , which consists of two terms. For the first summand, we have that

Provided that \( k \gtrsim \beta ^{-1} \), the desired statement follows from Young’s inequality. The estimate for the second summand is similar, except that in the second inequality above we use the random matrix estimate (Proposition 3.7) instead of Hölder’s inequality.

Next, we estimate . Let \( \eta >0 \) remain to be chosen. Using (B.6) from Lemma B.3, we can control the first term in by

After choosing \( \eta = 10 k^{-1} \), the desired estimate follows provided that \( k \gtrsim (1/2-\beta )^{-1} \). The only difference in the estimate of the second term in is that we use (B.5) instead of (B.6).

We now turn to the estimate of . Arguing exactly as in our estimate for , we obtain that

$$\begin{aligned} \Big \Vert J^{\scriptscriptstyle {S}}_s \Big ( (V* (I^{\scriptscriptstyle {S}}_s[v])^2 ) I^{\scriptscriptstyle {S}}_s[v] \Big ) \Big \Vert _{L_x^2}^2 \lesssim \langle s \rangle ^{-2+12 \delta + 8 \eta } \Vert I^{\scriptscriptstyle {S}}_s[v] \Vert _{{\mathcal {C}}_x^{-\frac{1}{2}-\delta }}^{4+\eta } \Vert I^{\scriptscriptstyle {S}}_s[v] \Vert _{H_x^{1}}^{2-\eta }. \end{aligned}$$

Using Young’s inequality, this contribution is acceptable. \(\square \)
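The Young step here is one standard way to make this precise: with conjugate exponents \( p = \tfrac{2}{2-\eta } \) and \( q = \tfrac{2}{\eta } \), the inequality \( ab \le \varepsilon a^p + C_{\varepsilon } b^q \) applied with \( a = \Vert I \Vert _{H_x^1}^{2-\eta } \) gives, abbreviating \( I = I^{\scriptscriptstyle {S}}_s[v] \) (a sketch),

$$\begin{aligned} \langle s \rangle ^{-2+12 \delta + 8 \eta } \Vert I \Vert _{{\mathcal {C}}_x^{-\frac{1}{2}-\delta }}^{4+\eta } \Vert I \Vert _{H_x^{1}}^{2-\eta } \le \varepsilon \Vert I \Vert _{H_x^{1}}^{2} + C_{\varepsilon ,\eta } \, \langle s \rangle ^{\frac{2}{\eta }(-2+12\delta +8\eta )} \Vert I \Vert _{{\mathcal {C}}_x^{-\frac{1}{2}-\delta }}^{\frac{2}{\eta }(4+\eta )}, \end{aligned}$$

and for \( \delta , \eta \) small the \( s \)-weight in the second term is integrable, so the \( H_x^1 \)-term can be absorbed.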

We are now ready to upgrade our bound on the expectation from Lemma 5.5 into a pointwise statement. The main tool will be the Boué-Dupuis formula.

Lemma 5.8

For any \( \delta >0 \), there exists a sequence \( (S_m)_{m=1}^\infty \) converging to infinity such that

(5.16)

Proof

Let \( k \ge 1 \) remain to be chosen. We define the auxiliary function

(5.17)

We will now show that

$$\begin{aligned} \lim _{S\rightarrow \infty } {\mathbb {E}}_{{\mathbb {Q}}^u_{\infty }} \Big [ e^{-G_S}\Big ] =0 , \end{aligned}$$
(5.18)

which implies the desired result. We could switch from to \( (W,{\mathbb {P}}) \), as we have done several times above. Since the in (5.8)–(5.10) are defined in terms of , however, we decided not to change the measure.

We define \( A^j_s \) similarly to (5.8)–(5.10), but with \( J^{\scriptscriptstyle {S}}_s \) replaced by \( J_s \), \( I^{\scriptscriptstyle {S}}_s \) replaced by \( I_s \), and replaced by . Since all our estimates for were uniform in \(S \ge 1 \), they also hold for \( A^j \). Using the Boué-Dupuis formula (Theorem 2.1) and the cubic binomial formula, we have that

(5.19)
(5.20)
(5.21)
(5.22)

The main term is given by (5.19). By Lemma 5.5, we see that (5.19) diverges to infinity as \( S \rightarrow \infty \). Thus, it remains to obtain a lower bound on the variational problem in (5.20)–(5.22). The terms in (5.20) are nonnegative and help with the lower bound. In contrast, the terms in (5.21) and (5.22) are viewed as errors and will be estimated in absolute value.

Regarding (5.20), we briefly note that

In the estimates below, we will often use that for all \( s \gg S \). We begin with the first term in (5.21). We have that

(5.23)

For the first term in (5.23), we obtain from Lemma 2.20 that

(5.24)

For the second term in (5.23), we obtain from Lemma 5.6 that

(5.25)

In the last line, we also used Lemma B.4. As \( S \rightarrow \infty \), this contribution can be absorbed in the coercive term (5.20). The estimate of the second summand in (5.21) is exactly the same.

Regarding the error terms in (5.22), we have that

The right-hand side can now be controlled using the same (or simpler) estimates as for the second summand in (5.23). This completes the proof. \(\square \)

Essentially the same estimates as in the previous proof can also be used to control the minor terms in (5.6) and (5.7). We record them in the following lemma.

Lemma 5.9

Let \( 0< \beta <1/2 \), let \( \delta >0 \) and let \(j=1,2,3\). Then, it holds that

(5.26)
(5.27)
(5.28)
(5.29)

Proof

We begin with the proof of (5.26). Using Itô’s isometry, we have that

Arguing essentially as in (5.23), we obtain that

which yields (5.26).

We now turn to (5.27). Using Lemma 5.6 and Corollary 4.6, we have for all \( \epsilon >0 \) that

(5.30)

Using Lemma 2.16 and (5.30), we obtain that

Next, we prove (5.28). Using Itô’s isometry and (5.30), we have that

Finally, we turn to (5.29), which is the most regular term. We first recall the algebraic identity . Then, Lemma 2.16 and (5.30) yield

(5.31)

From Lemma 2.23, we have that

(5.32)

By combining (5.31) and (5.32), we obtain

\(\square \)

We are now ready to prove the main result of this section.

Proof of Proposition 5.1

We recall from Lemma 5.4 that

(5.33)

where the remainder contains the terms from (5.6) and (5.7) with an additional factor of \( S^{-1+2\beta +\delta }\). By Lemma 5.8, there exists a deterministic sequence \( S_m \) such that the first summand in (5.33) converges to \( - \infty \) almost surely with respect to \( {\mathbb {Q}}^u_{\infty }\). Since \( 0<\beta <1/2 \), we have that

$$\begin{aligned} 1-2\beta > \max \Big ( \frac{1}{2} -\beta , 1 - 3\beta , \frac{1}{2}-2\beta , 0 \Big ) . \end{aligned}$$
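This elementary inequality can also be confirmed numerically on a grid of \( \beta \)-values (a throwaway sanity check, not part of the argument):

```python
import numpy as np

# Check 1 - 2b > max(1/2 - b, 1 - 3b, 1/2 - 2b, 0) for 0 < b < 1/2.
betas = np.linspace(1e-4, 0.5 - 1e-4, 10_000)
lhs = 1 - 2 * betas
rhs = np.maximum.reduce(
    [0.5 - betas, 1 - 3 * betas, 0.5 - 2 * betas, np.zeros_like(betas)]
)
assert np.all(lhs > rhs)  # strict inequality on the whole grid
```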

Using Lemma 5.9, this implies that the remainder converges to zero in \( L^1({\mathbb {Q}}^u_{\infty }) \). By passing to a subsequence if necessary, we can assume that converges to zero almost surely with respect to \( {\mathbb {Q}}^u_{\infty }\). Using (5.33), this implies that

$$\begin{aligned} \lim _{m\rightarrow \infty }\frac{1}{S_m^{1-2\beta -\delta }} \int _{{\mathbb {T}}^3} {\mathbb {W}}^{\scriptscriptstyle {S_m},4}_\infty \,{\mathrm {d}}x= -\infty \qquad {\mathbb {Q}}^u_{\infty }\text {-a.s.} \end{aligned}$$

Using \( \beta < 1/2 \) and Lemma 5.3, we see that the integral converges to zero in \( L^2({\mathbb {P}}) \). By passing to another subsequence if necessary, we obtain that

$$\begin{aligned} \lim _{m\rightarrow \infty }\frac{1}{S_m^{1-2\beta -\delta }} \int _{{\mathbb {T}}^3} {\mathbb {W}}^{\scriptscriptstyle {S_m},4}_\infty \,{\mathrm {d}}x= 0 \qquad {\mathbb {P}}\text {-a.s.} \end{aligned}$$

Since \( \mu _\infty \) is absolutely continuous with respect to \( \nu _\infty = (W_\infty )_{\#} {\mathbb {Q}}^u_{\infty }\) and \( {\mathscr {g}}= {{\,\mathrm{Law}\,}}_{{\mathbb {P}}}(W_\infty ) \), this implies (5.1) and (5.2). \(\square \)

Equipped with Corollary 3.4 and Proposition 5.1, we now provide a short proof of Theorem 1.5.

Proof of Theorem 1.5

If \( 0< \beta < 1/2 \), then the mutual singularity of the Gibbs measure \( \mu _\infty \) and the Gaussian free field \( {\mathscr {g}} \) directly follows from Proposition 5.1.
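The deduction of mutual singularity can be spelled out (schematically, assuming that (5.1) and (5.2) take the form of the two almost-sure limits established in the proof of Proposition 5.1): the Borel set

$$\begin{aligned} A \overset{\text {def}}{=}\Big \{ \phi :\lim _{m\rightarrow \infty } S_m^{-(1-2\beta -\delta )} \int _{{\mathbb {T}}^3} {\mathbb {W}}^{\scriptscriptstyle {S_m},4}_\infty [\phi ] \, {\mathrm {d}}x = -\infty \Big \} \end{aligned}$$

satisfies \( \mu _\infty (A) = 1 \), while \( {\mathscr {g}}(A) = 0 \) since the limit equals \( 0 \) for \( {\mathscr {g}} \)-almost every \( \phi \). A single set of full \( \mu _\infty \)-measure and zero \( {\mathscr {g}} \)-measure is precisely a witness of \( \mu _\infty \perp {\mathscr {g}} \).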

If \( \beta > 1/2 \), we claim for all \( p \ge 1 \) that

$$\begin{aligned} \frac{{\mathrm {d}}\mu _T}{{\mathrm {d}}{\mathscr {g}}} \in L^p({\mathscr {g}}) \end{aligned}$$
(A.1)

with uniform bounds in \( T\ge 1 \). Since \( \mu _T \) converges weakly to \( \mu _\infty \), this implies the absolute continuity \( \mu _\infty \ll {\mathscr {g}} \).
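The implication from (A.1) to absolute continuity is standard; a sketch: by Hölder’s inequality with \( \tfrac{1}{p} + \tfrac{1}{p'} = 1 \), every Borel set \( A \) satisfies

$$\begin{aligned} \mu _T(A) = \int 1_A \, \frac{{\mathrm {d}}\mu _T}{{\mathrm {d}}{\mathscr {g}}} \, {\mathrm {d}}{\mathscr {g}} \le \Big \Vert \frac{{\mathrm {d}}\mu _T}{{\mathrm {d}}{\mathscr {g}}} \Big \Vert _{L^p({\mathscr {g}})} {\mathscr {g}}(A)^{1/p'} \lesssim _p {\mathscr {g}}(A)^{1/p'} \end{aligned}$$

uniformly in \( T \ge 1 \). For closed \( A \), the portmanteau theorem yields \( \mu _\infty (A) \le \limsup _{T \rightarrow \infty } \mu _T(A) \lesssim _p {\mathscr {g}}(A)^{1/p'} \), and regularity of the measures extends this bound to all Borel sets; in particular, \( {\mathscr {g}}(A) = 0 \) implies \( \mu _\infty (A) = 0 \).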

In order to prove the claim, we recall that \( \mu _T= (W_\infty )_\# {\widetilde{\mu }}_{T} \) and \( {\mathscr {g}} = (W_\infty )_\# {\mathbb {P}}\). Furthermore, we see from (2.10) that the density \({\mathrm {d}}{\widetilde{\mu }}_{T}/{\mathrm {d}}{\mathbb {P}}\) is a function of \(W_\infty \). As a result, we obtain for all \(p \ge 1\) that

$$\begin{aligned} \int \Big ( \frac{{\mathrm {d}}{\widetilde{\mu }}_{T}}{{\mathrm {d}}{\mathbb {P}}}\Big )^p {\mathrm {d}}{\mathbb {P}} = \int \Big ( \frac{{\mathrm {d}}\mu _T}{{\mathrm {d}}{\mathscr {g}}}\Big )^p {\mathrm {d}}{\mathscr {g}} \end{aligned}$$
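This identity is the change-of-variables formula for pushforward measures: writing \( {\mathrm {d}}{\widetilde{\mu }}_{T}/{\mathrm {d}}{\mathbb {P}} = f(W_\infty ) \), one first checks \( {\mathrm {d}}\mu _T/{\mathrm {d}}{\mathscr {g}} = f \) from \( \mu _T(A) = \int 1_A(W_\infty ) f(W_\infty ) \, {\mathrm {d}}{\mathbb {P}} = \int _A f \, {\mathrm {d}}{\mathscr {g}} \) for all Borel \( A \), and then

$$\begin{aligned} \int \Big ( \frac{{\mathrm {d}}\mu _T}{{\mathrm {d}}{\mathscr {g}}}\Big )^p \, {\mathrm {d}}{\mathscr {g}} = \int f^p \, {\mathrm {d}}\big ( (W_\infty )_\# {\mathbb {P}} \big ) = \int f(W_\infty )^p \, {\mathrm {d}}{\mathbb {P}} = \int \Big ( \frac{{\mathrm {d}}{\widetilde{\mu }}_{T}}{{\mathrm {d}}{\mathbb {P}}}\Big )^p \, {\mathrm {d}}{\mathbb {P}}. \end{aligned}$$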

Thus, it suffices to bound the density \( {\mathrm {d}}{\widetilde{\mu }}_{T}/ {\mathrm {d}}{\mathbb {P}}\) in \( L^p({\mathbb {P}}) \). From the definition of \( {\widetilde{\mu }}_{T}\) (Definition 2.3) and the definition of the renormalized potential energy in (3.2), we have that

The first two factors are uniformly bounded in \( T \) by Proposition 3.3 and Corollary 3.4. The last factor is uniformly bounded in \( L^1({\mathbb {P}}) \) for all \( T \ge 1 \), since we only replaced the coupling constant \( \lambda \) by \( p \lambda \). This completes the proof of the claim (A.1). \(\square \)