1 Introduction

We consider the stochastic Cahn–Hilliard equation with additive noise

$$\begin{aligned}&\mathrm {d} u =\Delta \Big (-\varepsilon \Delta u+\frac{1}{\varepsilon }f(u)\Big )\mathrm {d} t +\varepsilon ^{\gamma }g\, \mathrm {d} W\;\;\;\;&\text{ in }\;\; {{\mathcal {D}}}_T := (0,T) \times {\mathcal {D}}\, , \end{aligned}$$
(1.1a)
$$\begin{aligned}&\partial _{n}u = \partial _{n}\Delta u =0\;\;\;\;&\text{ on }\;\; (0,T) \times \partial {\mathcal {D}}\ , \end{aligned}$$
(1.1b)
$$\begin{aligned}&u(0, \cdot )=u_0^\varepsilon \;\;\;\;&\text{ on }\;\;{\mathcal {D}}\, . \end{aligned}$$
(1.1c)

We fix \(T>0\) and \(\gamma >0\), and let \(\varepsilon >0\) be a (small) interfacial width parameter. For simplicity, we assume \({\mathcal {D}}\subset {\mathbb {R}}^{2}\) to be a convex, bounded polygonal domain, with \({n}\in {{\mathbb {S}}}^1\) the outer unit normal along \(\partial {\mathcal {D}}\), and \(W \equiv \{ W_t;\, 0 \le t \le T\}\) to be an \({{\mathbb {R}}}\)-valued Wiener process on a filtered probability space \((\Omega , {{\mathcal {F}}}, \{ {\mathcal F}_t\}_t, {{\mathbb {P}}})\). The function \(g \in C^{\infty }({\mathcal D})\) satisfies \(\int _{{{\mathcal {D}}}} g \, {\mathrm{d}}x = 0\), which ensures conservation of mass in (1.1), and \(\partial _{n}g =0\) on \(\partial {\mathcal {D}}\). Furthermore, we assume \(u^\varepsilon _0 \in {{\mathbb {H}}}^1\) and impose \(\int _{{\mathcal {D}}} u^{\varepsilon }_0\, {\mathrm{d}}x = 0\) for simplicity; the generalization to arbitrary mean values is straightforward.

The nonlinear drift part f in (1.1) is the derivative of the double-well potential \(F(u):=\frac{1}{4}(u^2-1)^2\), i.e., \(f(u)=F'(u)=u^3-u\). Associated to the system (1.1) is the Ginzburg–Landau free energy

$$\begin{aligned} {\mathcal {E}}(u) = \int _{{{\mathcal {D}}}} \Big (\frac{\varepsilon }{2} |\nabla u|^2 + \frac{1}{\varepsilon }F(u)\Big )\, {\mathrm{d}}x\, . \end{aligned}$$
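For readers who wish to reproduce the computations of Sect. 6, a minimal discrete evaluation of \({\mathcal {E}}\) may be useful; the following sketch assumes a uniform grid with spacing h on the unit square, and the helper name as well as the use of central differences via numpy.gradient are illustrative choices, not part of the paper.

```python
import numpy as np

def ginzburg_landau_energy(u, h, eps):
    """Approximate E(u) = int eps/2*|grad u|^2 + F(u)/eps dx on a uniform grid."""
    gx, gy = np.gradient(u, h)          # central differences along both axes
    grad_sq = gx**2 + gy**2
    F = 0.25 * (u**2 - 1.0)**2          # double-well potential
    return float(np.sum(0.5 * eps * grad_sq + F / eps) * h**2)
```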

The particular case \(g \equiv 0\) in (1.1) leads to the deterministic Cahn–Hilliard equation, which can be interpreted as the \({\mathbb {H}}^{-1}\)-gradient flow of the Ginzburg–Landau free energy. It is convenient to reformulate (1.1) as

$$\begin{aligned} \mathrm {d} u&=\Delta w \mathrm {d}t + \varepsilon ^{\gamma }g\, \mathrm {d}W&\,&\text{ in }\;\;{\mathcal {D}}_T, \end{aligned}$$
(1.2a)
$$\begin{aligned} w&=-\varepsilon \Delta u+\frac{1}{\varepsilon }f(u)&\,&\text{ in }\;\; {\mathcal {D}}_T\, , \end{aligned}$$
(1.2b)
$$\begin{aligned} \partial _{n}u&=\partial _{n}w=0&\,&\text{ on }\;\;(0,T) \times \partial {\mathcal {D}}\, , \end{aligned}$$
(1.2c)
$$\begin{aligned} u(0,\cdot )&= u_0^\varepsilon&\text{ on }\;\; {\mathcal {D}}\, , \end{aligned}$$
(1.2d)

where w denotes the chemical potential.
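To make the gradient-flow interpretation concrete, a standard formal computation (for smooth solutions of (1.2) with \(g \equiv 0\)) shows that the free energy decreases along the deterministic flow:

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d} t}\, {\mathcal {E}}\bigl (u(t)\bigr ) = \Big (-\varepsilon \Delta u + \frac{1}{\varepsilon } f(u), \partial _t u\Big ) = \bigl (w, \Delta w\bigr ) = -\Vert \nabla w\Vert ^2 \le 0\, , \end{aligned}$$

where the integrations by parts use the boundary conditions (1.2c).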

The Cahn–Hilliard equation has been derived as a phenomenological model for phase separation of binary alloys. The stochastic version of the Cahn–Hilliard equation, also known as the Cahn–Hilliard–Cook equation, has been proposed in [12, 21, 22]: here, the noise term models effects of external fields or impurities in the alloy, or describes thermal fluctuations or external mass supply. We also mention [18], where computational studies for (1.1) show a better agreement with experimental data in the presence of noise. For a theoretical analysis of various versions of the stochastic Cahn–Hilliard equation we refer to [8, 9, 13, 14]. Next to its relevance in materials science, (1.1) is used as an approximation to the Mullins–Sekerka/Hele–Shaw problem; by the classical result [1], the solution of the deterministic Cahn–Hilliard equation is known to converge to the solution of the Mullins–Sekerka/Hele–Shaw problem in the sharp interface limit \(\varepsilon \downarrow 0\). A partial convergence result for the stochastic Cahn–Hilliard equation (1.1) has been obtained recently in [3] for a sufficiently large exponent \(\gamma \). We extend this work to eventually validate uniform convergence of iterates of the time discretization Scheme 3.1 to the sharp-interface limit of (1.1) for vanishing numerical (time step k) and regularization (width \(\varepsilon \)) parameters: hence, the convergence of the zero level set of the solution to the geometric interface of the Mullins–Sekerka problem is accurately resolved via Scheme 3.1 in the asymptotic limit.

It is well known that an energy-preserving discretization, along with a proper balancing of numerical parameters and the interface width parameter \(\varepsilon \), is required for an accurate simulation of the deterministic Cahn–Hilliard equation; see e.g. [16]: analytically, this balancing of scales allows one to circumvent a straightforward application of Gronwall’s lemma in the error analysis, which would otherwise introduce a factor into the corresponding error estimate that grows exponentially in \(\varepsilon ^{-1}\). The present paper pursues a corresponding goal for a structure-preserving discretization of the stochastic Cahn–Hilliard equation (1.1); we identify proper discretization scales which allow a resolution of interface-driven evolutions, and thus avoid a Gronwall-type argument in the corresponding strong error analysis. This allows for practically relevant scaling scenarios of the involved numerical parameters to accurately approximate solutions of (1.1) even in the asymptotic regime where \(\varepsilon \ll 1\).

The proof of a strong error estimate for a space–time discretization of (1.1) which causes only polynomial dependence on \(\varepsilon ^{-1}\) in involved stability constants uses the following ideas:

  1. (a)

    We use the time-implicit Scheme 3.1, whose iterates inherit the basic energy bound [see Lemma 3.1, (i)] from (1.1). We benefit from a weak monotonicity property of the drift operator in the proof of Lemma 3.4 to effectively handle the cubic nonlinearity in the drift part.

  2. (b)

    For \(\gamma >0\) sufficiently large, we view (1.1) as a stochastic perturbation of the deterministic Cahn–Hilliard equation (i.e., (1.1) with \(g \equiv 0\)), and proceed analogously in the discrete setting. We then benefit in the proof of Lemma 3.4 from (the discrete version of) the spectral estimate (2.1) from [2, 11] for the deterministic Cahn–Hilliard equation (see Lemma 3.1, (v)).

  3. (c)

    For the deterministic setting [16], an induction argument is used on the discrete level, which addresses the cubic error term (scaled by \(\varepsilon ^{-1}\)) in Lemma 3.4. This argument cannot be generalized in a straightforward way to the current stochastic setting, where the discrete solution is a sequence of random variables allowing for (relatively) large temporal variations. For this reason we consider the propagation of errors on two complementary subsets of \(\Omega \): on the large subset \(\Omega _2\) we verify the error estimate (Lemma 3.5), while we benefit from the higher-moment estimates for iterates of Scheme 3.1 from (a) to derive a corresponding estimate on the small set \(\Omega \setminus \Omega _2\) (see Corollary 3.7). A combination of both results then establishes our first main result: a strong error estimate for the numerical approximation of the stochastic Cahn–Hilliard equation (see Theorem 3.8), avoiding Gronwall’s lemma.

  4. (d)

    Building on the results from (c), and using an \({\mathbb {L}}^\infty \)-bound for the solution of Scheme 3.1 (Lemma 5.1), along with error estimates in stronger norms (Lemma 5.2), we show uniform convergence of iterates on large subsets of \(\Omega \) (Theorem 5.5). This intermediate result then implies the second main result of the paper: the convergence in probability of iterates of Scheme 3.1 to the sharp interface limit in Theorem 5.7 for sufficiently large \(\gamma \). In particular, we show that the numerical solution of (1.1) uniformly converges in probability to 1, \(-1\) in the interior and exterior of the geometric interface of the deterministic Mullins–Sekerka problem (5.1), respectively. As a consequence we obtain uniform convergence of the zero level set of the numerical solution to the geometric interface of the Mullins–Sekerka problem in probability; cf. Corollary 5.8.

The error analysis below in particular identifies proper balancing strategies of numerical parameters with the interface width that allow one to approximate the limiting sharp-interface model for realistic problem setups, and it motivates the use of space–time adaptive meshes for numerical simulations; see e.g. [25]. In Sect. 6, we present computational studies which evidence asymptotic properties of the solution for different scalings of the noise term. Our studies suggest the deterministic Mullins–Sekerka problem as the sharp-interface limit already for \(\gamma \ge 1\); we observe this in simulations for spatially colored as well as for space–time white noise. In contrast, corresponding simulations for \(\gamma = 0\) indicate that the sharp-interface limit is a stochastic version of the Mullins–Sekerka problem; see Sect. 6.4.

To sum up, the convergence analysis presented in this paper is a combination of a perturbation and a discretization error analysis. The latter depends on stability properties of the proposed numerical scheme: higher-moment energy estimates for Scheme 3.1, a discrete spectral estimate for the related deterministic variant, and a local error analysis on the sample set \(\Omega \) are crucial ingredients of our approach. The techniques developed in this paper constitute a general framework which can be used to treat different and/or more general phase-field models, including the stochastic Allen–Cahn equation, and apply as well to settings which involve multiplicative noise, trace-class Hilbert-space-valued driving Wiener processes, and bounded polyhedral domains \({{\mathcal {D}}} \subset {{\mathbb {R}}}^3\).

The paper is organized as follows. Section 2 is dedicated to the analysis of the continuous problem. The time discretization Scheme 3.1 is proposed in Sect. 3 and rates of convergence are shown, while Sect. 4 extends this convergence analysis to its finite-element discretization. The convergence of the numerical discretization to the sharp-interface limit is studied in Sect. 5. Section 6 contains the details of the implementation of the numerical schemes for the stochastic Cahn–Hilliard and the stochastic Mullins–Sekerka problem, respectively, as well as computational experiments which complement the analytical results.

2 The stochastic Cahn–Hilliard equation

2.1 Notation

For \(1\le p \le \infty \), we denote by \(\bigl ( {\mathbb {L}}^p, \Vert \cdot \Vert _{{\mathbb {L}}^p}\bigr )\) the standard spaces of p-th order integrable functions on \({\mathcal {D}}\). By \((\cdot ,\cdot )\) we denote the \({\mathbb {L}}^2\)-inner product, and we let \(\Vert \cdot \Vert = \Vert \cdot \Vert _{{\mathbb {L}}^2}\). For \(k\in {\mathbb {N}}\) we write \(\bigl ({\mathbb {H}}^k, \Vert \cdot \Vert _{{\mathbb {H}}^k}\bigr )\) for the usual Sobolev spaces on \({\mathcal {D}}\), and \({\mathbb {H}}^{-1} = ({\mathbb {H}}^1)^\prime \). We define \({\mathbb {L}}^2_0 := \{ \phi \in {\mathbb {L}}^2; \,\, \int _{\mathcal {D}} \phi \,\mathrm {d}x= 0\}\), and for \(v \in {\mathbb {L}}^2\) we denote its zero-mean counterpart by \({\overline{v}} \in {\mathbb {L}}^2_0\), i.e., \({\overline{v}} := v - \frac{1}{|{\mathcal {D}}|}\int _{{\mathcal {D}}}v\,\mathrm {d}x\). We frequently use the isomorphism \((-\Delta )^{-1}: {\mathbb {L}}^2_0 \rightarrow {{\mathbb {H}}^2} \cap {\mathbb {L}}^2_0\), where \({w} = (-\Delta )^{-1}{\overline{v}}\) is the unique solution of

$$\begin{aligned} -\Delta {w} = {\overline{v}} \quad \mathrm {in}\,\, {\mathcal {D}}, \qquad \displaystyle \partial _{n}{w} = 0 \quad \mathrm {on}\,\, \partial {\mathcal {D}}. \end{aligned}$$

In particular, \((\nabla (-\Delta )^{-1}{\overline{v}}, \nabla \varphi ) = ({\overline{v}}, \varphi )\) for all \(\varphi \in {\mathbb {H}}^1\), \({\overline{v}}\in {\mathbb {L}}^2_0\). Below, we denote \(\Delta ^{-1/2} {\overline{v}}:= \nabla (-\Delta )^{-1}{\overline{v}}\) and note that the norms \(\Vert {\overline{v}} \Vert _{{\mathbb {H}}^{-1}}\) and \( \Vert \Delta ^{-1/2}{\overline{v}} \Vert \) are equivalent for all \({\overline{v}}\in {\mathbb {L}}^2_0\). Throughout the paper, C denotes a generic positive constant that may depend on \({\mathcal {D}}\) and T, but is independent of \(\varepsilon \).
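For numerical experiments, \((-\Delta )^{-1}\) and the norm \(\Vert \Delta ^{-1/2}{\overline{v}}\Vert \) can be evaluated cheaply; the following minimal sketch assumes a uniform grid on the unit square and the five-point Neumann Laplacian, which is diagonalized by the type-II discrete cosine transform. The grid, the spectral solver, and the helper names are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def inv_neumann_laplacian(v, h):
    """Solve -Delta w = vbar with Neumann BC on a uniform grid (spacing h).

    The five-point Neumann Laplacian is diagonalized by the type-II DCT;
    zeroing the constant mode projects v onto its zero-mean part vbar and
    removes the zero eigenvalue, so the solve is well defined.
    """
    n0, n1 = v.shape
    k0, k1 = np.arange(n0), np.arange(n1)
    lam = ((2.0 - 2.0 * np.cos(np.pi * k0 / n0))[:, None]
           + (2.0 - 2.0 * np.cos(np.pi * k1 / n1))[None, :]) / h**2
    v_hat = dctn(v, norm='ortho')
    v_hat[0, 0] = 0.0                 # drop the mean: enforces vbar
    lam[0, 0] = 1.0                   # avoid division by zero in the 0-mode
    return idctn(v_hat / lam, norm='ortho')

def h_minus_one_norm(v, h):
    """Return ||Delta^{-1/2} vbar|| = ((-Delta)^{-1} vbar, vbar)^{1/2}."""
    w = inv_neumann_laplacian(v, h)
    vbar = v - v.mean()
    return float(np.sqrt(np.sum(w * vbar) * h**2))
```

The identity used in h_minus_one_norm, \(\Vert \Delta ^{-1/2}{\overline{v}}\Vert ^2 = \bigl ((-\Delta )^{-1}{\overline{v}}, {\overline{v}}\bigr )\), follows from the variational characterization above with \(\varphi = (-\Delta )^{-1}{\overline{v}}\).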

2.2 The problem

We recall the definition of a strong variational solution of the stochastic Cahn–Hilliard equation (1.1); its existence, uniqueness, and regularity properties have been obtained in [14, Thm. 8.2], [13, Prop. 2.2].

Definition 2.1

Let \(u_0^\varepsilon \in L^2(\Omega , {\mathcal {F}}_0, {\mathbb {P}}; {\mathbb {H}}^1) \cap L^4(\Omega , {\mathcal {F}}_0, {\mathbb {P}}; {\mathbb {L}}^4)\) and denote \(\underline{{\mathbb {H}}}^2 = \{\varphi \in {\mathbb {H}}^2:\,\, \partial _{n}\varphi = 0\,\,\mathrm {on}\,\, \partial {\mathcal {D}} \}\). Then, the process

$$\begin{aligned}&u\in L^2\bigl (\Omega , \{ {\mathcal {F}}_t\}_t, {\mathbb {P}}; C([0,T]; {\mathbb {H}}^1)\cap L^2(0,T; \underline{{\mathbb {H}}}^2)\bigr ) \\&\quad \cap L^4\bigl (\Omega , \{ {\mathcal {F}}_t\}_t, {\mathbb {P}}; C([0,T]; {\mathbb {L}}^4)\bigr ) \end{aligned}$$

is called a strong solution of (1.1) if it satisfies \({\mathbb {P}}\)-a.s. and for all \(0 \le t \le T\)

$$\begin{aligned} \bigl (u(t), \varphi \bigr )= & {} (u_0^\varepsilon , \varphi ) + \int _0^t \Big (-\varepsilon \Delta u + \frac{1}{\varepsilon } f(u), \Delta \varphi \Big )\mathrm {d}s \\&+ \varepsilon ^\gamma \int _0^t (\varphi ,g) \, \mathrm {d}W(s) \quad \forall \varphi \in \underline{{\mathbb {H}}}^2. \end{aligned}$$

The following lemma establishes existence and bounds for the strong solution u of (1.1) and for the chemical potential w from (1.2b); cf. [13, Section 2.3] for a proof of (i), while (ii) follows similarly to part (i) by the Itô formula and the Burkholder–Davis–Gundy inequality.

Lemma 2.1

Let \(T>0\). There exists a unique strong solution u of (1.1), and there hold

  1. (i)

       \( \displaystyle {\mathbb {E}}\big [ {\mathcal {E}}\bigl (u(t)\bigr )\big ] + {\mathbb {E}}\Big [ \int _0^t\Vert \nabla w(s)\Vert ^2\, \mathrm {d}s\Big ] \le C \big ( {\mathcal {E}}(u_0^\varepsilon ) + 1\big ) \qquad \forall \, t\in [0,T]\, , \)

  2. (ii)

       For any \(p\in {\mathbb {N}}\) there exists \(C\equiv C(p)>0\) such that    

    $$\begin{aligned}\displaystyle {\mathbb {E}}\big [ \sup _{t\in [0,T]} {\mathcal {E}}\bigl (u(t) \bigr )^p\big ] \le C\bigl ( {\mathcal {E}}(u_0^\varepsilon )^p + 1\bigr )\, . \end{aligned}$$

2.3 Spectral estimate

We denote by \(u_{\texttt {CH}}: {\mathcal {D}}_T \rightarrow {{\mathbb {R}}}\) the solution of the deterministic Cahn–Hilliard equation, i.e., (1.1) with \(g \equiv 0\). Let \(\varepsilon _0 \ll 1\); throughout the paper we assume that for every \(\varepsilon \in (0, \varepsilon _0)\), there exists an arbitrarily close approximation \(u_{\texttt {A}}\in C^2(\overline{{\mathcal {D}}}_T)\) of \(u_{\texttt {CH}}\) which satisfies the spectral estimate (cf. [1, relation (2.3)])

$$\begin{aligned} \inf _{0\le t\le T}\inf _{\psi \in {\mathbb {H}}^{1}, \; w=(-\Delta )^{-1}\psi } \frac{\varepsilon \Vert \nabla \psi \Vert ^2+\frac{1}{\varepsilon }\bigl (f'(u_{\texttt {A}})\psi ,\psi \bigr )}{\Vert \nabla w\Vert ^2}\ge -C_0\, , \end{aligned}$$
(2.1)

where the constant \(C_0 >0\) does not depend on \(\varepsilon >0\); cf. [1, 2, 11].

2.4 Error bound between u of (1.1) and \(u_{\texttt {CH}}\) of (1.1) with \(g \equiv 0\).

In [3] the authors study the convergence of the solution of the stochastic Cahn–Hilliard equation (1.1) to the deterministic sharp-interface limit. In particular, they show the convergence in probability of the solution u of (1.1) to the approximation \(u_{\texttt {A}}\) of \(u_{\texttt {CH}}\) for sufficiently large \(\gamma >0\). Apart from the spectral estimate (2.1), a central ingredient of their analysis is the use of a stopping time argument to control the drift nonlinearity. The stopping time, which in our setting is defined as

$$\begin{aligned} T_\varepsilon :=\inf \Big \{ t\in [0,T]:\,\,\frac{1}{\varepsilon }\int _0^t\Vert u(s)-u_{\texttt {CH}}(s)\Vert _{{\mathbb {L}}^3}^3\, {\mathrm{d}}s >\varepsilon ^{\sigma _0}\Big \} \end{aligned}$$

for some constant \(\sigma _0>0\), enables the derivation of the estimates in Lemma 2.2 below up to the stopping time \(T_\varepsilon \) on a large sample subset

$$\begin{aligned} \Omega _1 := \Big \{ \omega \in \Omega :\,\, \varepsilon ^\gamma \sup _{t\in [0,T_\varepsilon ]}\Big |\int _0^{t}\big ( u(s)-u_{\texttt {CH}}(s), (-\Delta )^{-1}g\, \mathrm {d}W(s)\big )\Big | \le \varepsilon ^{\kappa _0}\Big \} \end{aligned}$$

that satisfies \({\mathbb {P}}[\Omega _1] \rightarrow 1\) for \(\varepsilon \downarrow 0\), for some constant \(\kappa _0\). On specifying condition (A) below, it can be shown that \(T_\varepsilon \equiv T\), which yields Lemma 2.2. In this section we extend the work [3] by showing a strong error estimate for \(u-u_{\texttt {CH}}\) in Lemma 2.3.

In Sect. 3 we perform an analogous analysis on the discrete level by using a stopping index \(J_\varepsilon \), and a set \(\Omega _2\) which are discrete counterparts of \(T_\varepsilon \) and \(\Omega _1\), respectively. Both approaches require a lower bound for the noise strength \(\gamma \) to ensure, in particular, positive probability of the sets \(\Omega _1\) and \(\Omega _2\), respectively.

For the analysis in this section we require the following assumptions to hold.

(A):

Let \({\mathcal {E}}(u^\varepsilon _0) \le C\). Assume that the triplet \((\sigma _0, \kappa _0, \gamma ) \in \bigl [{\mathbb {R}}^+\bigr ]^3\) satisfies

$$\begin{aligned} {\sigma _0> 12\,, \qquad \sigma _0> \kappa _0> \frac{2}{3}\sigma _0 + 4\,, \qquad \gamma > \max \big \{ \frac{23}{3}, \frac{\kappa _0}{2}\big \}}\, . \end{aligned}$$
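For instance, the triplet \((\sigma _0, \kappa _0, \gamma ) = (13, \frac{64}{5}, 8)\) satisfies (A): indeed, \(13 > 12\), \(\frac{2}{3}\cdot 13 + 4 = \frac{38}{3}< \frac{64}{5} < 13\), and \(8 > \max \{\frac{23}{3}, \frac{32}{5}\} = \frac{23}{3}\).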

Assumption (A) ensures positivity of all exponents in the estimates in the lemmas of this section. The following lemma relies on the spectral estimate (2.1) and is a consequence of [3, Theorem 3.10] for \(p=3\), \(d=2\), where a slightly different notational setup is used.

Lemma 2.2

Suppose \(\mathbf{(A)}\). There exists \(\varepsilon _0 \equiv \varepsilon _0(\sigma _0, \kappa _0) >0\) such that for any \(\varepsilon \le \varepsilon _0\) and sufficiently large \({\mathfrak {l}}>0\)

$$\begin{aligned} \mathrm{(i)}&{{\mathbb {P}}}\bigl [\Vert u -u_{\texttt {A}}\Vert ^2_{L^\infty (0,T; {\mathbb {H}}^{-1})} \le {C} {\varepsilon ^{\kappa _0}}\bigr ] \ge 1- C \varepsilon ^{(\gamma + \frac{\sigma _0+1}{3} - \kappa _{0}){\mathfrak {l}}}\,,\\ \mathrm{(ii)}&{{\mathbb {P}}}\bigl [\varepsilon \Vert \nabla [u -u_{\texttt {A}}]\Vert ^2_{L^2(0,T; {\mathbb {L}}^{2})} \le {C} {\varepsilon ^{\frac{2\sigma _0}{3}}} \bigr ] \ge 1- C \varepsilon ^{(\gamma + \frac{\sigma _0+1}{3} - \kappa _{0}){\mathfrak {l}}}\, , \end{aligned}$$

where \({\mathfrak {l}}\) and \(C \equiv C({\mathfrak {l}})>0\) are independent of \(\gamma \), \(\sigma _0\), \(\kappa _0\) and \(\varepsilon \).

A closer inspection of the proofs in [3] (cf. [3, Lemma 4.3] in particular) reveals that the parameter \({\mathfrak {l}}\) can be chosen arbitrarily large in the above lemma.

We now use Lemma 2.2 to show bounds for the difference \(u-u_{\texttt {CH}}\) in different norms.

Lemma 2.3

Suppose (A), and \(\varepsilon \le \varepsilon _0\), for \(\varepsilon _0 \equiv \varepsilon _0(\sigma _0, \kappa _0)>0\) sufficiently small. There exists \(C>0\) such that

$$\begin{aligned} {\mathbb {E}} \Big [ \Vert u -u_{\texttt {CH}}\Vert _{L^\infty (0,T;{\mathbb {H}}^{-1})}^2 + \varepsilon \Vert \nabla [u -u_{\texttt {CH}}]\Vert ^2_{L^2(0,T; {\mathbb {L}}^{2})} \Big ] \le C\varepsilon ^{\frac{2\sigma _0}{3}}\, . \end{aligned}$$

Proof

By [1, Theorem 2.1] (see also [1, Theorem 4.11 and Remark 4.6]) there exists \(u_{\texttt {A}} \in C^2(\overline{{\mathcal {D}}}_T) {\cap {{\mathbb {L}}}^2_0}\) which satisfies (2.1) and

$$\begin{aligned} \Vert u_{\texttt {A}}-u_{\texttt {CH}}\Vert _{L^\infty (0,T;{\mathbb {H}}^{-1})}^2 + {\Vert u_{\texttt {A}}-u_{\texttt {CH}}\Vert _{L^2(0,T;{{\mathbb {H}}}^{1})}^2} \le C\varepsilon ^{2\gamma }\,, \end{aligned}$$
(2.2)

and, cf. [1, Theorem 2.3],

$$\begin{aligned} \Vert u_{\texttt {A}}-u_{\texttt {CH}}\Vert _{C^1({\mathcal {D}}_T)} \le C\varepsilon \,. \end{aligned}$$
(2.3)

By using the energy bound for \(u_{\texttt {CH}}\) and (2.3) we get \(\Vert {{u}_{\texttt {A}}}\Vert _{L^\infty (0,T;{\mathbb {H}}^{1})} \le C\).

Consider the subset \({\widetilde{\Omega }}_1 \subset \Omega \) (cf. [3, Lemma 4.5, Lemma 4.6]),

$$\begin{aligned} {\widetilde{\Omega }}_1 := \displaystyle \big \{ \omega \in \Omega :\, \Vert u-u_{\texttt {A}}\Vert ^2_{L^\infty (0,T, {\mathbb {H}}^{-1})} + {\varepsilon \Vert \nabla [u -u_{\texttt {A}}]\Vert ^2_{L^2(0,T; {\mathbb {L}}^{2})} \le {C}\varepsilon ^{\frac{2\sigma _0}{3}} } \big \}\,. \end{aligned}$$

By Lemma 2.2, (ii), we have \({\mathbb {P}}[{\widetilde{\Omega }}_1^c] \le C\varepsilon ^{\big (\gamma + \frac{\sigma _0+1}{3} - \kappa _{0}\big ){\mathfrak {l}}} <1\), for sufficiently large \({\mathfrak {l}}>0\). Then using Lemma 2.1, (ii) and (2.3), we estimate the error

$$\begin{aligned} {{\texttt {Err}}_{\texttt {A}}} :=\Vert u-u_{\texttt {A}}\Vert _{L^\infty (0,T;{\mathbb {H}}^{-1})}^2 + {\varepsilon \Vert \nabla [u -u_{\texttt {A}}]\Vert ^2_{L^2(0,T; {\mathbb {L}}^{2})} }\, , \end{aligned}$$

as

$$\begin{aligned} {\mathbb {E}}\big [{\texttt {Err}}_{\texttt {A}}\big ]&=\int _{\Omega }\mathbb {1}_{{\widetilde{\Omega }}_1}{\texttt {Err}}_{\texttt {A}} \, {\mathrm{d}}\omega + \int _{\Omega }\mathbb {1}_{{\widetilde{\Omega }}_1^c}{\texttt {Err}}_{\texttt {A}} \, {\mathrm{d}}\omega \\&\le {C}{\varepsilon ^{\frac{2\sigma _0}{3}}} + C \bigl ( {{\mathbb {P}}}[ {\widetilde{\Omega }}_1^c] \bigr )^{1/2} \Bigl ({\mathbb {E}}\Big [\sup _{t\in [0,T]}{\mathcal {E}}\bigl (u(t)\bigr )^2\Big ] + \Vert u_{\texttt {A}}\Vert _{L^\infty (0,T;{\mathbb {H}}^{1})}^2 \Bigr )^{1/2}\\&\le C\bigl ( \varepsilon ^{\frac{2\sigma _0}{3}} + \varepsilon ^{(\gamma + \frac{\sigma _0+1}{3} - \kappa _{0})\frac{{\mathfrak {l}}}{2}} \bigr ) \, . \end{aligned}$$

It is due to (A) that \(\gamma + \frac{\sigma _0+1}{3} - \kappa _{0} > 0\). We now choose \({\mathfrak {l}}\) sufficiently large such that \(\big (\gamma + \frac{\sigma _0+1}{3} - \kappa _{0}\big )\frac{{\mathfrak {l}}}{2} > \frac{2}{3}\sigma _0\) and the statement follows from the estimate for \({\texttt {Err}}_{\texttt {A}}\) and (2.2) by the triangle inequality. \(\square \)

3 A time discretization scheme for (1.1)

For fixed \(J \in {{\mathbb {N}}}\), let \(0=t_0<t_1<\cdots <t_J=T\) be an equidistant partition of [0, T] with step size \(k = \frac{T}{J}\), and \(\Delta _j W := W(t_j) - W(t_{j-1})\), \(j=1,\dots , J\). We approximate (1.1) by the following scheme:

Scheme 3.1

For every \(1 \le j \le J\), find an \([{{\mathbb {H}}}^1]^2\)-valued random variable \((X^j, w^j)\) such that \({{\mathbb {P}}}\)-a.s.

$$\begin{aligned} \begin{aligned}&(X^j-X^{j-1},\varphi )+k(\nabla w^{j},\nabla \varphi )=\varepsilon ^{\gamma }\bigl (g,\varphi \bigr )\Delta _j W \;\;\;\; \, \, \quad \, \forall \, \varphi \in {{\mathbb {H}}}^1\,,\\&\varepsilon (\nabla X^j,\nabla \psi )+\frac{1}{\varepsilon } \bigl (f(X^j),\psi \bigr )=(w^j,\psi ) \qquad \qquad \quad \quad \ \, \forall \, \psi \in {{\mathbb {H}}}^1\, ,\\&X^0 =u_0^\varepsilon \in {{\mathbb {H}}}^1\, . \end{aligned} \end{aligned}$$

The solvability and uniqueness of \(\{(X^j, w^j)\}_{j\ge 1}\), as well as the \({\mathbb {P}}\)-a.s. conservation of mass of \(\{X^j\}_{j\ge 1}\) are immediate.
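For orientation, the following minimal sketch advances one step of Scheme 3.1 on a uniform grid over the unit square, with the five-point Neumann Laplacian diagonalized by the type-II discrete cosine transform (cf. the sketch in Sect. 2.1). The plain fixed-point iteration that lags the cubic term, as well as all helper names and parameters, are illustrative assumptions and not the implementation of Sect. 6.

```python
import numpy as np
from scipy.fft import dctn, idctn

def cahn_hilliard_step(X_old, g, dW, k, eps, gamma, max_iter=100, tol=1e-10):
    """One step of the implicit scheme, solved by a lagged fixed-point loop.

    Strong form of the update, with f(X) = X**3 - X and Neumann BC:
        X + k*eps*Delta^2 X - (k/eps)*Delta f(X) = X_old + eps**gamma * g * dW.
    """
    n = X_old.shape[0]
    h = 1.0 / n
    idx = np.arange(n)
    lam1d = (2.0 - 2.0 * np.cos(np.pi * idx / n)) / h**2   # symbol of -Delta
    lam = lam1d[:, None] + lam1d[None, :]
    rhs_hat = dctn(X_old + eps**gamma * g * dW, norm='ortho')
    X = X_old.copy()
    for _ in range(max_iter):
        f_hat = dctn(X**3 - X, norm='ortho')
        # (1 + k*eps*lam**2) X_hat = rhs_hat - (k/eps)*lam*f_hat
        X_new = idctn((rhs_hat - (k / eps) * lam * f_hat)
                      / (1.0 + k * eps * lam**2), norm='ortho')
        if np.max(np.abs(X_new - X)) < tol:
            break
        X = X_new
    return X_new

# Usage sketch: Delta_j W ~ N(0, k), and k <= eps**3 mirrors Lemma 3.2.
# rng = np.random.default_rng(0); dW = rng.normal(0.0, np.sqrt(k))
```

Since the constant DCT mode carries eigenvalue zero and \(\int _{{\mathcal {D}}} g\, \mathrm {d}x = 0\), the update preserves the spatial mean of X exactly, mirroring the \({\mathbb {P}}\)-a.s. conservation of mass noted above.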

For the error analysis of Scheme 3.1, we use the iterates \(\bigl \{ (X^{j}_{\texttt {CH}}, w^{j}_{\texttt {CH}})\bigr \}_{j=0}^J \subset \bigl [ {{\mathbb {H}}}^1\bigr ]^2\) which solve Scheme 3.1 for \(g \equiv 0\). The following lemma collects the properties of these iterates from [16, 17]. We remark that, compared to [16, 17], the results are stated in a simplified (but equivalent) form, which is more suitable for the subsequent analysis.

Lemma 3.1

Suppose \({{\mathcal {E}}}(u^{\varepsilon }_0) \le C\). Let \(\bigl \{ (X^{j}_{\texttt {CH}}, w^{j}_{\texttt {CH}})\bigr \}_{j=0}^J \subset \bigl [ {{\mathbb {H}}}^1\bigr ]^2\) be the solution of Scheme 3.1 for \(g \equiv 0\). For every \(0<\beta < \frac{1}{2}\), \(\varepsilon \in (0, \varepsilon _0) \), \(k \le \varepsilon ^3\), and \({{\mathfrak {p}}}_{\texttt {CH}} >0\), there exist \({{\mathfrak {m}}}_{\texttt {CH}}, {{\mathfrak {n}}}_{\texttt {CH}}, C>0\), and \({{\mathfrak {l}}}_{\texttt {CH}} \ge 3\) such that

$$\begin{aligned} \mathrm{(i)} \quad&\displaystyle \max _{1 \le j \le J}{\mathcal {E}}(X^j_{\texttt {CH}}) \le {{\mathcal {E}}}(u_0^{\varepsilon })\,. \end{aligned}$$

Assume moreover that \(\Vert u_0^{\varepsilon }\Vert _{{\mathbb {H}}^2} \le C\varepsilon ^{-{\mathfrak {p}}_{\texttt {CH}}}\); then

$$\begin{aligned} \mathrm{(ii)} \quad&\displaystyle \max _{1 \le j \le J} \Vert X^{j}_{\texttt {CH}}\Vert _{{\mathbb {H}}^2} \le C \varepsilon ^{-{\mathfrak {n}}_{\texttt {CH}}}\,, \\ \mathrm{(iii)} \quad&\displaystyle \max _{1 \le j \le J} \Vert X^{j}_{\texttt {CH}}\Vert _{{\mathbb {L}}^\infty } \le C \quad \mathrm {for} \quad k \le C \varepsilon ^{{{\mathfrak {l}}}_{\texttt {CH}}}\,. \end{aligned}$$

Assume in addition that \(\Vert u_0^{\varepsilon }\Vert _{{\mathbb {H}}^3} \le C\varepsilon ^{-{\mathfrak {p}}_{\texttt {CH}}}\). Then, for \(k \le C \varepsilon ^{{{\mathfrak {l}}}_{\texttt {CH}}}\) and \(C_0 >0\) from (2.1), it holds that

$$\begin{aligned} \mathrm{(iv)} \quad&\displaystyle \max _{1 \le j \le J} \Vert u_{\texttt {CH}}(t_j) - X^j_{\texttt {CH}}\Vert ^2_{{{\mathbb {H}}}^{-1}} + \sum _{j=1}^J k^{1+\beta } \big \Vert \nabla \bigl [u_{\texttt {CH}}(t_j) - X^j_{\texttt {CH}} \bigr ]\big \Vert ^2 \le C \frac{k^{2-\beta }}{\varepsilon ^{{{\mathfrak {m}}}_{\texttt {CH}}}}\, , \\ \mathrm{(v)} \quad&\displaystyle \min _{1\le j\le J}\, \inf _{\psi \in {\mathbb {H}}^{1}, \; w=(-\Delta )^{-1}\psi } \frac{\varepsilon \Vert \nabla \psi \Vert ^2+\frac{1-\varepsilon ^3}{\varepsilon }\bigl (f'(X^j_{\texttt {CH}})\psi ,\psi \bigr )}{\Vert \nabla w\Vert ^2}\ge -(1-\varepsilon ^3)(C_0+1)\,. \end{aligned}$$

Proof

The proof of (i), (ii), (iv), (v) is a direct consequence of [16, Lemma 3, Corollary 1, Proposition 2].

To show (iii), we use the Gagliardo–Nirenberg inequality and [16, inequality (76)], (ii), (iv) to get the following \({\mathbb {L}}^{\infty }\)-error estimate for \(k \le C\varepsilon ^{{\mathfrak {l}}_{\texttt {CH}}}\), and some \({\mathfrak {l}}_{\texttt {CH}}>0\),

$$\begin{aligned} \max _{1 \le j \le J}\Vert X_{\texttt {CH}}^j - u_{\texttt {CH}}(t_j)\Vert _{{\mathbb {L}}^\infty } \le \varepsilon ^2\, . \end{aligned}$$

Hence, \(\Vert X_{\texttt {CH}}^j\Vert _{{\mathbb {L}}^\infty }\le C\) since \(\Vert u_{\texttt {CH}}\Vert _{{\mathbb {L}}^\infty }\le C\); cf. [1, proof of Theorem 2.3] and [17, Lemma 2.2]. \(\square \)

The numerical solution of Scheme 3.1 satisfies the discrete counterpart of the energy estimate in Lemma 2.1, (i). The time-step constraint in the lemma below is a consequence of the implicit treatment of the nonlinearity; see the last term in (3.2), its estimate (3.3), and (3.4); the lower bound for admissible \(\gamma \) has the same origin.

Lemma 3.2

Let \(\gamma > \frac{3}{2}\), \(\varepsilon \in (0,\varepsilon _0)\) and \(k \le \varepsilon ^3\). Then the solution of Scheme 3.1 conserves mass along every path \(\omega \in \Omega \), and there exists \(C > 0\) such that

  1. (i)

       \(\displaystyle \max _{1\le j\le J} {\mathbb {E}}\bigl [ {{\mathcal {E}}}(X^j)\bigr ] + \frac{k}{2} \sum _{i=1}^J{\mathbb {E}}\bigl [\Vert \nabla w^i\Vert ^2\bigr ] \le C \,\bigl ( {{{\mathcal {E}}}(u^\varepsilon _0)} +1\bigr )\,,\)

  2. (ii)

       \(\displaystyle {\mathbb {E}}\big [\max _{1\le j\le J}{{\mathcal {E}}}(X^j) \big ] \le C \bigl ( {{\mathcal {E}}}(u^\varepsilon _0) + 1\bigr )\,.\)

For every \(p = 2^r\), \(r \in {{\mathbb {N}}}\), there exists \(C \equiv C(p, T) > 0\) such that

  1. (iii)

       \( \displaystyle \max _{1\le j\le J} {\mathbb {E}}\bigl [ \vert {\mathcal E}(X^j)\vert ^p\bigr ] \le C \displaystyle \bigl ( \vert {\mathcal E}(u^\varepsilon _0)\vert ^p +1\bigr )\, ,\)

  2. (iv)

       \( \displaystyle {\mathbb {E}}\big [ \max _{1\le j\le J} \vert {\mathcal E}(X^j)\vert ^p\big ] \le C \bigl ( \vert {\mathcal E}(u^\varepsilon _0)\vert ^p +1\bigr )\, .\)

Proof

(i) For \(\omega \in \Omega \) fixed, we choose \(\varphi =w^j(\omega )\) and \(\psi =[X^j-X^{j-1}](\omega )\) in Scheme 3.1. Adding both equations then leads \({\mathbb {P}}\)-a.s. to

$$\begin{aligned} \begin{array}{lll} \displaystyle \frac{\varepsilon }{2} \Vert \nabla X^j\Vert ^2 - \frac{\varepsilon }{2} \Vert \nabla X^{j-1}\Vert ^2 + \frac{\varepsilon }{2} \Vert \nabla [X^j - X^{j-1}]\Vert ^2 + k\Vert \nabla w^j\Vert ^2 \\ \displaystyle \qquad + \frac{1}{\varepsilon }\bigl (f(X^j), X^j-X^{j-1}\bigr ) = \varepsilon ^\gamma (g, w^j)\Delta _jW\, . \end{array} \end{aligned}$$
(3.1)

Note that the third term on the left-hand side reflects the numerical dissipativity in the scheme. We can estimate the nonlinear term as (cf. [15, Section 3.1]),

$$\begin{aligned} \begin{aligned} \bigl (f(X^j), X^j-X^{j-1} \bigr )&\ge \frac{1}{4} \Vert {{\mathfrak {f}}}(X^j)\Vert ^2-\frac{1}{4}\Vert {{\mathfrak {f}}}(X^{j-1})\Vert ^2 \\&+ \frac{1}{4}\Vert {{\mathfrak {f}}}(X^{j})-{{\mathfrak {f}}}(X^{j-1})\Vert ^2 - \frac{1}{2}\Vert X^j-X^{j-1}\Vert ^2\, , \end{aligned} \end{aligned}$$
(3.2)

where we employ the notation \({{\mathfrak {f}}}(u) := |u|^2 -1\), i.e., \(f(X^j)= {{\mathfrak {f}}}(X^j)X^j\). The third term on the right-hand side again reflects numerical dissipativity.

For fixed \(\omega \in \Omega \), choosing \(\varphi = (-\Delta )^{-1}[X^j-X^{j-1}](\omega )\) in Scheme 3.1, we eventually have \({{\mathbb {P}}}\)-a.s.,

$$\begin{aligned} \Vert \Delta ^{-1/2}[X^j-X^{j-1}]\Vert ^2\le \Big (k\Vert \nabla w^j\Vert + \varepsilon ^{\gamma }\Vert \Delta ^{-1/2} g\Vert |\Delta _j W |\Big )\Vert \Delta ^{-1/2}[X^j-X^{j-1}]\Vert \, , \end{aligned}$$

which together with \(\Vert \Delta ^{-1/2} g\Vert \le C\) yields the estimate

$$\begin{aligned} \Vert \Delta ^{-1/2}[X^j-X^{j-1}]\Vert ^2\le 2k^2\Vert \nabla w^j\Vert ^2 + C\varepsilon ^{2\gamma }|\Delta _j W|^2\, . \end{aligned}$$

Hence, using this estimate and exploiting again the inherent numerical dissipation of the scheme, we can estimate

$$\begin{aligned} \begin{aligned} \frac{1}{2\varepsilon }\Vert X^j-X^{j-1}\Vert ^2&=\frac{1}{2\varepsilon }\bigl (\nabla (-\Delta )^{-1}[X^j-X^{j-1}], \nabla [X^j-X^{j-1}] \bigr ) \\&\le \frac{1}{4\varepsilon ^3}\Vert \Delta ^{-1/2}[{X^j-X^{j-1}}]\Vert ^2 + \frac{\varepsilon }{4}\Vert \nabla [X^j-X^{j-1}]\Vert ^2 \\&\le \frac{k^2}{2 \varepsilon ^3}\Vert \nabla w^j\Vert ^2 + C\varepsilon ^{2\gamma -3}|\Delta _j W|^2 + \frac{\varepsilon }{4}\Vert \nabla [X^j-X^{j-1}]\Vert ^2\, . \end{aligned} \end{aligned}$$
(3.3)

We substitute (3.2) along with the last inequality into (3.1) and get

$$\begin{aligned} \begin{aligned}&\frac{\varepsilon }{2} \bigl (\Vert \nabla X^j\Vert ^2 - \Vert \nabla X^{j-1}\Vert ^2\bigr ) + \frac{\varepsilon }{4} \Vert \nabla [X^j - X^{j-1}]\Vert ^2 \\&\quad \quad +\frac{1}{4\varepsilon } \Bigl (\Vert {{\mathfrak {f}}}(X^j)\Vert ^2- \Vert {\mathfrak f}(X^{j-1})\Vert ^2 + \Vert {{\mathfrak {f}}}(X^{j})-{\mathfrak f}(X^{j-1})\Vert ^2\Bigr ) \\&\quad \quad + \big (k-\frac{k^2}{2 \varepsilon ^3}\big )\Vert \nabla w^j\Vert ^2 \\&\quad \le \varepsilon ^\gamma (g, w^j)\Delta _jW + C\varepsilon ^{2\gamma -3}|\Delta _j W|^2\, , \end{aligned} \end{aligned}$$
(3.4)

which motivates the time-step restriction \(k < 2 \varepsilon ^3\). Next, by using the second equation in Scheme 3.1, we can rewrite the first term on the right-hand side as

$$\begin{aligned} \begin{aligned} \varepsilon ^{\gamma } \bigl ( g, w^j\bigr ) \Delta _j W&= \varepsilon ^{\gamma +1} \Bigl [\bigl ( \nabla [X^j-X^{j-1}], \nabla g\bigr ) + \bigl ( \nabla X^{j-1}, \nabla g\bigr ) \Bigr ]\Delta _j W \\&\qquad + \varepsilon ^{\gamma -1} \Bigl [ \bigl ( f(X^j) - f(X^{j-1}), g\bigr ) + \bigl ( f(X^{j-1}), g\bigr ) \Bigr ]\Delta _j W\, \\&=: A_1+A_2+A_3+A_4\, . \end{aligned} \end{aligned}$$
(3.5)

Note that \({\mathbb {E}}[A_2]= {\mathbb {E}}[A_4] = 0\). Next, we obtain

$$\begin{aligned} \begin{aligned}&A_1 = \varepsilon ^{\gamma +1}\bigl ( \nabla [X^j-X^{j-1}], \nabla g\bigr ) \Delta _j W \\&\quad \le \frac{\varepsilon }{8}\Vert \nabla [X^j-X^{j-1}]\Vert ^2 + C\varepsilon ^{2\gamma +1}\Vert \nabla g\Vert ^2|\Delta _j W|^2 \\&\quad \le \frac{\varepsilon }{8}\Vert \nabla [X^j-X^{j-1}]\Vert ^2 + C\varepsilon ^{2\gamma +1}|\Delta _j W|^2\, . \end{aligned} \end{aligned}$$
(3.6)

On recalling \( f(X^j) = {{\mathfrak {f}}} (X^j) X^j \), we rewrite the remaining term as

$$\begin{aligned} \begin{aligned} A_3&= \varepsilon ^{\gamma -1} \bigl ( f(X^j) - f(X^{j-1}), g\bigr )\Delta _j W \\&= \varepsilon ^{\gamma -1} \Big ( \big [ {{\mathfrak {f}}}(X^j) - {{\mathfrak {f}}}(X^{j-1})\big ] X^{j}, g\Big )\Delta _j W \\&\quad + \varepsilon ^{\gamma -1} \Big ( {\mathfrak f}(X^{j-1})\big [X^{j}-X^{j-1}\big ], g\Big )\Delta _j W \\&=: A_{3,1}+A_{3,2}\, . \end{aligned} \end{aligned}$$
(3.7)

Thanks to the embeddings \({{\mathbb {L}}}^s \hookrightarrow {\mathbb {L}}^r\) (\(r \le s\)), and the Cauchy–Schwarz and Young inequalities,

$$\begin{aligned} \begin{aligned} A_{3,1}&\le \frac{1}{16\varepsilon }\Vert {{\mathfrak {f}}}(X^j) - {\mathfrak f}(X^{j-1})\Vert ^2 + C\varepsilon ^{2\gamma -1}\Vert |X^{j}|^2 \Vert _{{\mathbb {L}}^1}\Vert g\Vert ^2_{{\mathbb {L}}^\infty } |\Delta _j W|^2 \\&\le \frac{1}{16\varepsilon }\Vert {{\mathfrak {f}}}(X^j) - {\mathfrak f}(X^{j-1})\Vert ^2 + C\varepsilon ^{2\gamma -1}\Big (\Vert {{\mathfrak {f}}}(X^j) - {{\mathfrak {f}}}(X^{j-1})\Vert _{{\mathbb {L}}^1} + \Vert X^{j-1} \Vert ^2\Big ) |\Delta _j W|^2 \\&\le \frac{1}{8\varepsilon }\Vert {{\mathfrak {f}}}(X^j) - {\mathfrak f}(X^{j-1})\Vert ^2 + C\varepsilon ^{4\gamma -1}|\Delta _j W|^4 + C\varepsilon ^{2\gamma -1}\Big (\Vert {\mathfrak f}(X^{j-1})\Vert ^2+1\Big )|\Delta _j W|^2\,. \end{aligned} \end{aligned}$$

The leading term may now be controlled by the numerical dissipation term in (3.2). Finally, by Poincaré’s inequality, we estimate

$$\begin{aligned} \begin{aligned} A_{3,2}&\le \Vert {\mathfrak f}(X^{j-1})\Vert ^2\Vert g\Vert ^2_{{\mathbb {L}}^\infty }|\Delta _j W|^2 + \varepsilon ^{2\gamma -2}\Vert X^{j}-X^{j-1}\Vert ^2 \\&\le C \Vert {{\mathfrak {f}}}(X^{j-1})\Vert ^2|\Delta _j W|^2 + C_{{\mathcal {D}}}\varepsilon ^{2\gamma -2}\Vert \nabla [X^{j}-X^{j-1}]\Vert ^2\, . \end{aligned} \end{aligned}$$

By combining the above estimates for \(A_{3,1}\), \(A_{3,2}\) we obtain an estimate for (3.7).

Next, we insert the estimates (3.5), (3.6), and (3.7) into (3.4), account for \(2\gamma - 2 > 1\), sum the resulting inequality over j, and take expectations,

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\bigl [\frac{\varepsilon }{2} \Vert \nabla X^j\Vert ^2 +\frac{1}{4\varepsilon } \Vert {{\mathfrak {f}}}(X^j)\Vert ^2 \bigr ] + \frac{1}{8\varepsilon }\sum _{i=1}^j{\mathbb {E}}\bigl [\Vert {\mathfrak f}(X^{i})-{{\mathfrak {f}}}(X^{i-1})\Vert ^2\bigr ] \\&\qquad + \Big (\frac{\varepsilon }{8} - C_{{\mathcal {D}}}\varepsilon ^{2\gamma -2}\Big )\sum _{i=1}^j{\mathbb {E}}\bigl [\Vert \nabla [X^i - X^{i-1}]\Vert ^2\bigr ] + \big (k-\frac{k^2}{2 \varepsilon ^3}\Big )\sum _{i=1}^j{\mathbb {E}}\bigl [\Vert \nabla w^i\Vert ^2\bigr ] \\&\quad \le {\mathbb {E}}\bigl [\frac{\varepsilon }{2}\Vert \nabla X^0\Vert ^2 +\frac{1}{4\varepsilon } \Vert {{\mathfrak {f}}}(X^0)\Vert ^2\bigr ] + CT\bigl (\varepsilon ^{4\gamma -1}k + \varepsilon ^{2\gamma +1} + \varepsilon ^{2\gamma -1} + \varepsilon ^{2\gamma -3}\bigr ) \\&\qquad + C(1+\varepsilon ^{2\gamma -1}){k}\sum _{i=0}^{j-1} {\mathbb {E}}\big [\Vert {{\mathfrak {f}}}(X^{i})\Vert ^2\big ]\, . \end{aligned} \end{aligned}$$
(3.8)

On noting that \(\Vert F(u)\Vert _{{{\mathbb {L}}}^1} = \frac{1}{4}\Vert {\mathfrak f}(u)\Vert ^2\), assertion (i) now follows with the help of the discrete Gronwall lemma.

(ii) The second estimate can be shown along the lines of the first part of the proof by applying \(\max _{j}\) before taking the expectation in (3.8). The additional term that arises from the terms \(A_2\), \(A_4\) in (3.5) can be rewritten by using the second equation in Scheme 3.1,

$$\begin{aligned} \begin{aligned}&\displaystyle {\mathbb {E}}\Big [\max _{1\le i\le j}\Big | \sum _{\ell =1}^i\Big \{ \varepsilon ^{\gamma -1} \bigl ( f(X^{\ell -1}), g\bigr ) + \varepsilon ^{\gamma +1} \bigl ( \nabla X^{\ell -1}, \nabla g\bigr )\Big \} \Delta _\ell W \Big |\Big ] \\&\displaystyle \quad = {\mathbb {E}}\Big [\max _{1\le i\le j}\Big | \sum _{\ell =1}^i \varepsilon ^{\gamma } \bigl (w^{\ell -1}, g\bigr ) \Delta _\ell W \Big |\Big ] = {\mathbb {E}}\Big [\max _{1\le i\le j}\Big | \sum _{\ell =1}^i \varepsilon ^{\gamma } \bigl ({\overline{w}}^{\ell -1}, g\bigr ) \Delta _\ell W \Big |\Big ] \\&\displaystyle \quad \le {\mathbb {E}}\Big [\max _{1\le i\le j}\Big | \sum _{\ell =1}^i \varepsilon ^{\gamma } \bigl ({\overline{w}}^{\ell -1}, g\bigr ) \Delta _\ell W \Big |^2\Big ]^{1/2} \, , \end{aligned} \end{aligned}$$
(3.9)

where the equality in the second line follows from the zero mean property of the noise.

The last sum in (3.9) is a discrete square-integrable martingale, and by the independence properties of the summands, the Poincaré inequality and the energy estimate (i) we have

$$\begin{aligned} \displaystyle {\mathbb {E}}\Big [\Big ( \sum _{\ell =1}^i \varepsilon ^{\gamma } \bigl ({\overline{w}}^{\ell -1}, g\bigr ) \Delta _\ell W \Big )^2\Big ]&= \varepsilon ^{2\gamma }{\mathbb {E}}\Big [k\sum _{\ell =1}^i \bigl ({\overline{w}}^{\ell -1}, g\bigr )^2 \Big ] \\&\le C_{{\mathcal {D}}}\, \varepsilon ^{2\gamma } {\mathbb {E}}\Big [k\sum _{\ell =1}^i \bigl \Vert \nabla w^{\ell -1}\Vert ^2 \Vert g\Vert ^2_{{\mathbb {L}}^\infty } \Big ] \le C \varepsilon ^{2\gamma }\,. \end{aligned}$$

Therefore, (3.9) can be estimated using the discrete BDG-inequality (see Lemma 3.3) and part (i) by

$$\begin{aligned} \le \displaystyle C\varepsilon ^{\gamma }\Vert g\Vert _{{\mathbb {L}}^\infty } {\mathbb {E}}\Big [k\sum _{\ell =1}^J \bigl \Vert {\overline{w}}^{\ell -1}\Vert ^2\Big ]^{1/2} \le \displaystyle C \varepsilon ^{\gamma } {\mathbb {E}}\Big [\sum _{\ell =1}^J k\bigl \Vert \nabla w^{\ell -1}\Vert ^2 \Big ]^{1/2} \le C\varepsilon ^{\gamma }\, . \end{aligned}$$

(iii) We show assertion (iii) for \(p=2^1\). By collecting the estimates of the terms in (3.5) in part (i) (cf. (3.6), (3.7)) we deduce from (3.4) that

$$\begin{aligned} \begin{aligned}&{\mathcal {E}}(X^j) - {\mathcal {E}}(X^{j-1}) + \frac{\varepsilon }{4} \Vert \nabla [X^{j} - X^{j-1}]\Vert ^2 + \frac{1}{4\varepsilon } \Vert {\mathfrak {f}}(X^j) - {\mathfrak {f}}(X^{j-1})\Vert ^2 + \frac{k}{2} \Vert \nabla w^j\Vert ^2 \\&\quad \le C \Bigl ( \varepsilon {{\mathcal {E}}}(X^{j-1}) + 1\Bigr ) \vert \Delta _j W\vert ^2 + C \varepsilon ^{4\gamma -1} \vert \Delta _j W\vert ^4 + C (\varepsilon ^{2\gamma +1} + \varepsilon ^{2\gamma -3})\vert \Delta _j W\vert ^2 \\&\qquad + \varepsilon ^{\gamma +1} (\nabla X^{j-1}, \nabla g) \Delta _j W + \varepsilon ^{\gamma -1} \bigl ( f(X^{j-1}), g\bigr )\Delta _j W\, . \end{aligned} \end{aligned}$$
(3.10)

Multiply this inequality by \({\mathcal {E}}(X^j)\) and use the identity \((a-b)a = \frac{1}{2} [ a^2 - b^2 + (a-b)^2]\), the estimate \(\varepsilon ^{2\gamma +1} \le \varepsilon _0^{4}\varepsilon ^{2\gamma -3}\), Young’s inequality, and the generalized Hölder inequality to conclude

$$\begin{aligned} \begin{aligned}&\frac{1}{2} \Bigl [ \vert {{\mathcal {E}}}(X^j)\vert ^2 - \vert {{\mathcal {E}}}(X^{j-1})\vert ^2 + \vert {{\mathcal {E}}}(X^j) - {{\mathcal {E}}}(X^{j-1})\vert ^2\Bigr ] {+ \frac{\varepsilon }{4} \Vert \nabla [X^{j} - X^{j-1}]\Vert ^2 {{\mathcal {E}}}(X^j)} \\&\quad \le C \Bigl ( \varepsilon \vert {{\mathcal {E}}}(X^{j-1})\vert ^2 + {{\mathcal {E}}}(X^{j-1})\Bigr ) \vert \Delta _j W\vert ^2 + C \varepsilon ^{2\gamma -3} {{\mathcal {E}}}(X^{j-1}) \vert \Delta _j W\vert ^2 \\&\qquad + C\Bigl (\varepsilon ^2\vert {{\mathcal {E}}}(X^{j-1})\vert ^2 + 1 + \varepsilon ^{4\gamma -1}{{\mathcal {E}}}(X^{j-1}) + \varepsilon ^{2(2\gamma -3)} \Bigr ) \vert \Delta _j W\vert ^4 + C \varepsilon ^{2(4\gamma -1)} \vert \Delta _j W\vert ^8 \\&\qquad + \frac{1}{4} \bigl \vert {{\mathcal {E}}}(X^j) - {\mathcal E}(X^{j-1})\bigr \vert ^2 \\&\qquad + \Bigl [\varepsilon ^{\gamma +1} (\nabla X^{j-1}, \nabla g) \Delta _j W + \varepsilon ^{\gamma -1} \bigl ( f(X^{j-1}), g\bigr )\Delta _j W\Bigr ] {{\mathcal {E}}}(X^{j-1}) \\&\qquad + C \max \bigl \{ \Vert \nabla g\Vert ^2, \Vert g\Vert ^2_{{{\mathbb {L}}}^{\infty }} \bigr \}\Bigl [\varepsilon ^{2(\gamma +1)} \Vert \nabla X^{j-1}\Vert ^2 + \varepsilon ^{2(\gamma -1)} \Vert {\mathfrak {f}}(X^{j-1})\Vert ^2 \Vert X^{j-1}\Vert ^2 \Bigr ] \vert \Delta _j W\vert ^2\, . \end{aligned} \end{aligned}$$
(3.11)

We note that to get the above estimate we employed the reformulation \({{\mathcal {E}}}(X^{j}) = {{\mathcal {E}}}(X^{j-1}) + ({\mathcal E}(X^{j})-{{\mathcal {E}}}(X^{j-1}))\) on the right-hand side.

By Poincaré’s inequality, the last term in (3.11) may be bounded as

$$\begin{aligned}&\varepsilon ^{2(\gamma -1)} \Bigl [ \varepsilon ^4 \Vert \nabla X^{j-1}\Vert ^2 + \Vert {\mathfrak {f}}(X^{j-1})\Vert ^2 \Vert X^{j-1}\Vert ^2 \Bigr ] \vert \Delta _j W\vert ^2 \\&\quad \le C \varepsilon ^{2(\gamma -1)} \Bigl [\varepsilon ^{3} {{\mathcal {E}}}\bigl (X^{j-1}\bigr ) + \bigl \vert {{\mathcal {E}}}(X^{j-1})\bigr \vert ^2 \Bigr ] \vert \Delta _j W\vert ^2\, . \end{aligned}$$

After summing up (3.11) and taking expectations, we get for any \(j\le J\) that

$$\begin{aligned} \begin{aligned}&\displaystyle \frac{1}{2}{\mathbb {E}}\big [{\mathcal {E}}(X^j)^2\big ] + \frac{1}{4}\sum _{i=1}^j{\mathbb {E}}\big [\big |{\mathcal {E}}(X^i) - {\mathcal {E}}(X^{i-1})\big |^2\big ] \\&\displaystyle \quad \le \displaystyle \frac{1}{2}{\mathbb {E}}\big [{\mathcal {E}}(X^{0})^2\big ] + C t_j + C (\varepsilon ^{2\gamma -3}+1+\varepsilon ^{4\gamma -1}k) k\sum _{i=0}^{j-1} {\mathbb {E}}\big [{\mathcal {E}}(X^{i})] \\&\quad \quad \displaystyle + C (\varepsilon ^{2(\gamma -1)}+\varepsilon + \varepsilon ^2k) k\sum _{i=0}^{j-1}{\mathbb {E}}\big [{\mathcal {E}}(X^{i})^2\big ], \end{aligned} \end{aligned}$$
(3.12)

where the third term is bounded via (3.8) in part (ii), and the statement then follows from the discrete Gronwall inequality.

For \(p = 2^r\), \(r=2\), we may now argue correspondingly: we start with (3.11), which we now multiply by \(\vert {\mathcal E}(X^j)\vert ^2\). Assertion (iii) then follows via induction with respect to r.

(iv) The last estimate follows analogously to (ii) from the BDG-inequality and (iii). \(\square \)

The error analysis of the implicit Scheme 3.1 in the subsequent Sect. 3.1 involves the use of a stopping index \(J_\varepsilon \), and an associated random variable \(\mathbb {1}_{\{j \le J_\varepsilon \}}\) that is measurable w.r.t. the \(\sigma \)-algebra \({\mathcal {F}}_{t_j}\), but not w.r.t. \({\mathcal {F}}_{t_{j-1}}\). This issue prohibits the use of the standard BDG-inequality since \(\mathbb {1}_{\{j \le J_\varepsilon \}}\) is not independent of the Wiener increment \(\Delta _j W\). The following lemma contains a discrete BDG-inequality which will be used in Sect. 3.1. We take \(\{ {{\mathcal {F}}}_{t_j}\}_{j=0}^J\) to be a discrete filtration associated with the time mesh \(\{ t_j\}_{j=0}^J \subset [0,T]\) on \((\Omega , {{\mathcal {F}}}, {{\mathbb {P}}})\).

Lemma 3.3

For every \(j=1,\dots , J\), let \(F_{j}\) be an \({\mathcal {F}}_{t_j}\)-measurable random variable, and let \(\Delta _jW\) be independent of \(F_{j-1}\). Assume that the \(\{{\mathcal F}_{t_j}\}_j\)-martingale \(G_{\ell } := \sum _{j=1}^{{\ell }}F_{j-1}\Delta _j W\) (\(1 \le {\ell } \le J\)), with \(G_0=0\), is square-integrable. Then for any stopping index \(\tau : \Omega \rightarrow {{\mathbb {N}}}_0\) such that \(\mathbb {1}_{\{j\le \tau \}}\) is \({\mathcal {F}}_{t_j}\)-measurable, it holds that

$$\begin{aligned} {\mathbb {E}}\Big [\max _{{\ell }=1,\dots , {\tau \wedge J}}\big \vert \sum _{j=1}^{{\ell }}F_{j-1}\Delta _j W\big \vert ^2\Big ] \le 4{\mathbb {E}}\Big [\sum _{j=1}^{({\tau }+1)\wedge J}kF_{j-1}^2\Big ]\, , \end{aligned}$$

where \(\tau \wedge J = \min \{\tau ,J\}\).

Proof

We start by noting that

$$\begin{aligned} \sum _{j=1}^{(\tau +1)\wedge \ell } F_{j-1} \Delta _j W = \sum _{j=1}^{\ell } \mathbb {1}_{\{j-1\le \tau \}} F_{j-1} \Delta _j W \qquad (1 \le \ell \le J)\,. \end{aligned}$$

With this identity, we obtain

$$\begin{aligned} {\mathbb {E}}\Big [\max _{\ell =1,\dots ,\tau \wedge J}\big \vert \sum _{j=1}^{\ell } F_{j-1} \Delta _j W\big \vert ^2\Big ]&\le {\mathbb {E}}\Big [\max _{\ell =1,\dots , (\tau +1)\wedge J}\big \vert \sum _{j=1}^{\ell } F_{j-1} \Delta _j W\big \vert ^2\Big ] \nonumber \\&= {\mathbb {E}}\Big [\max _{\ell =1,\dots , J}\big \vert \sum _{j=1}^{\ell } \mathbb {1}_{\{j-1\le \tau \}} F_{j-1} \Delta _j W\big \vert ^2\Big ]\, . \end{aligned}$$
(3.13)

The random variable \(\mathbb {1}_{\{j-1\le \tau \}}\) is \({\mathcal {F}}_{t_{j-1}}\)-measurable; therefore, \({G}_\ell := \sum _{j=1}^{\ell } \mathbb {1}_{\{j-1\le \tau \}} F_{j-1} \Delta _jW\) is also a discrete square-integrable martingale. Hence, by the \(L^2\)-maximum martingale inequality, using the independence of \(\mathbb {1}_{\{j\le \tau \}} F_{j}\) and \(\Delta _\ell W\) for \(j < \ell \), it follows that

$$\begin{aligned}&{\mathbb {E}}\Big [\max _{\ell =1,\dots ,J}\big \vert \sum _{j=1}^{\ell } \mathbb {1}_{\{j-1\le \tau \}} F_{j-1} \Delta _j W\big \vert ^2\Big ] \le 4 {\mathbb {E}}\Big [ \big \vert \sum _{j=1}^{J}\mathbb {1}_{\{j-1\le \tau \}} F_{j-1} \Delta _j W\big \vert ^2\Big ] \nonumber \\&\quad \le 4{\mathbb {E}} \Big [\sum _{j=1}^{J} (\mathbb {1}_{\{j-1\le \tau \}} F_{j-1})^2 |\Delta _j W|^2\Big ] \nonumber \\&\qquad + 8\sum _{i,j=1;i<j}^{J} {\mathbb {E}} \big [\mathbb {1}_{\{i-1\le \tau \}} F_{i-1} \mathbb {1}_{\{j-1\le \tau \}} F_{j-1} \Delta _i W\big ] {\mathbb E}\big [ \Delta _j W\big ] \nonumber \\&\quad = 4\sum _{j=1}^{J} {\mathbb {E}} \Big [(\mathbb {1}_{\{j-1\le \tau \}} F_{j-1})^2\Big ] {\mathbb {E}} \Big [ |\Delta _j W|^2\Big ] = 4{\mathbb {E}}\Big [\sum _{j=1}^{(\tau +1)\wedge J} F_{j-1}^2 k\Big ]\, . \end{aligned}$$
(3.14)

The assertion of the lemma then follows from (3.13) and (3.14). \(\square \)
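As a quick plausibility check (not part of the paper's argument), the inequality of Lemma 3.3 can be verified by Monte Carlo simulation; the particular choices of \(F_j\), of the stopping index \(\tau \), and all names below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check of Lemma 3.3 for F_j = cos(W_{t_j}) and the stopping
# index tau = first j with |W_{t_j}| > 1/2, so that 1_{j <= tau} is even
# F_{t_{j-1}}-measurable, on [0, T] with J uniform steps of size k.
rng = np.random.default_rng(42)
J, T, M = 200, 1.0, 20_000
k = T / J
lhs = np.empty(M)
rhs = np.empty(M)
for m in range(M):
    dW = rng.normal(0.0, np.sqrt(k), size=J)
    W = np.concatenate(([0.0], np.cumsum(dW)))   # W at t_0, ..., t_J
    F = np.cos(W)                                # F_j is F_{t_j}-measurable
    G = np.cumsum(F[:-1] * dW)                   # G_l = sum_{j<=l} F_{j-1} dW_j
    exceed = np.nonzero(np.abs(W[1:]) > 0.5)[0]
    tau = exceed[0] + 1 if exceed.size else J    # stopping index in {1,...,J}
    lhs[m] = np.max(np.abs(G[:min(tau, J)]))**2
    rhs[m] = k * np.sum(F[:min(tau + 1, J)]**2)
# The sample means should satisfy lhs <= 4 * rhs up to Monte Carlo error.
print(lhs.mean(), '<=', 4 * rhs.mean())
```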

3.1 Error analysis

Denote \(Z^j:=X^j-X_{\texttt {CH}}^j\), use Scheme 3.1 for a fixed \(\omega \in \Omega \), and choose \(\varphi = (-\Delta )^{-1}Z^{j}(\omega )\), \(\psi = Z^j(\omega )\). We obtain \({\mathbb {P}}\)-a.s.

$$\begin{aligned} \begin{aligned} \frac{1}{2}&\Bigl (\Vert \Delta ^{-1/2}Z^j\Vert ^2- \Vert \Delta ^{-1/2}Z^{j-1}\Vert ^2+ \Vert \Delta ^{-1/2}[Z^j-Z^{j-1}]\Vert ^2\Bigr ) + k\varepsilon \Vert \nabla Z^j\Vert ^2\\&\quad +\frac{k}{\varepsilon }\bigl (f(X^j)-f(X_{\texttt {CH}}^j),Z^j\bigr ) =\varepsilon ^\gamma (\Delta ^{-1/2}g,\Delta ^{-1/2}Z^j)\Delta _jW\, . \end{aligned} \end{aligned}$$
(3.15)

We use Lemma 3.1, (v) to obtain a first error bound.

Lemma 3.4

Assume \(\gamma > \frac{3}{2}\), \(\Vert u^\varepsilon _0\Vert _{{{\mathbb {H}}}^3} \le C \varepsilon ^{-{\mathfrak p}_{\texttt {CH}}}\) for \(\varepsilon \in (0,\varepsilon _0)\), and let \(k \le C \varepsilon ^{{{\mathfrak {l}}}_{\texttt {CH}}}\) with \({\mathfrak l}_{\texttt {CH}} \ge 3\) from Lemma 3.1 be sufficiently small. There exists \(C>0\), such that \({\mathbb {P}}\)-a.s. and for all \(1\le {\ell } \le J\),

$$\begin{aligned}&\max _{1\le j\le {\ell }}\Vert \Delta ^{-1/2}Z^j\Vert ^2+{\varepsilon ^4 k}\sum _{j=1}^{{\ell }}\Vert \nabla Z^j\Vert ^2\nonumber \\&\quad \le \frac{Ck}{\varepsilon }\sum _{j=1}^{{\ell }}\Vert Z^j\Vert _{{\mathbb {L}}^3}^3+ C\varepsilon ^\gamma \max _{1\le j\le {\ell }}|\sum _{i=1}^{j}((-\Delta )^{-1}g,Z^{i-1})\Delta _i W| +C\varepsilon ^{2\gamma } \sum _{j=1}^{{\ell }}|\Delta _j W|^2. \nonumber \\ \end{aligned}$$
(3.16)

Proof

1. Consider the last term on the left-hand side of (3.15). On recalling \(Z^j=X^j-X_{\texttt {CH}}^j\), by a property of f, see [17, eq. (2.6)], and Lemma 3.1, (iii), we get for some \(C>0\)

$$\begin{aligned} \begin{aligned}&\bigl (f(X^j)-f(X_{\texttt {CH}}^j),Z^j\bigr )=\bigl (f(X_{\texttt {CH}}^j)-f(X^j),X_{\texttt {CH}}^j-X^j\bigr )\\&\quad \ge \bigl (f'(X_{\texttt {CH}}^j)[X_{\texttt {CH}}^j-X^j],X_{\texttt {CH}}^j-X^j \bigr )-3\bigl (X_{\texttt {CH}}^j \vert X_{\texttt {CH}}^j-X^j \vert ^2,X_{\texttt {CH}}^j-X^j\bigr )\\&\quad \ge (1-\varepsilon ^3)\bigl (f'(X_{\texttt {CH}}^j)Z^j,Z^j\bigr )-C\Vert Z^j\Vert _{{\mathbb {L}}^3}^3 + \varepsilon ^3\bigl (f'(X_{\texttt {CH}}^j)Z^j,Z^j\bigr ). \end{aligned} \end{aligned}$$
(3.17)

2. In order to later keep a portion of \(\Vert \nabla Z^j\Vert ^2\) on the left-hand side of (3.15) we use the identity

$$\begin{aligned}&\displaystyle \varepsilon \Vert \nabla Z^j\Vert ^2 + \frac{(1-\varepsilon ^3)}{\varepsilon }\bigl (f'(X_{\texttt {CH}}^j)Z^j,Z^j\bigr ) \nonumber \\&\quad =\displaystyle (1-\varepsilon ^3) \left( \varepsilon \Vert \nabla Z^j\Vert ^2 + \frac{(1-\varepsilon ^3)}{\varepsilon }\bigl (f'(X_{\texttt {CH}}^j)Z^j,Z^j\bigr )\right) \nonumber \\&\quad \quad \displaystyle + \varepsilon ^3 \left( \varepsilon \Vert \nabla Z^j\Vert ^2 + \frac{(1-\varepsilon ^3)}{\varepsilon }\bigl (f'(X_{\texttt {CH}}^j)Z^j,Z^j\bigr )\right) . \end{aligned}$$
(3.18)

We apply Lemma 3.1, (v) to get a lower bound for the first term on the right-hand side,

$$\begin{aligned} \ge -(C_0+1)\Vert \Delta ^{-1/2}Z^j\Vert ^2_{{\mathbb {L}}^2}. \end{aligned}$$

On noting \(\varepsilon <1\), we estimate the remaining nonlinearities in (3.18) using Lemma 3.1, (iii),

$$\begin{aligned} \varepsilon ^2 \bigl (f'(X_{\texttt {CH}}^j)Z^j,Z^j\bigr ) \le {C\varepsilon ^2} \Vert \nabla Z^j\Vert \Vert \Delta ^{-1/2}Z^j\Vert \le \frac{\varepsilon ^4}{4}\Vert \nabla Z^j\Vert ^2 + C\Vert \Delta ^{-1/2}Z^j\Vert ^2. \end{aligned}$$

3. We insert the estimates from Steps 1 and 2 into (3.15), and use the bound

$$\begin{aligned} \varepsilon ^{\gamma } ((-\Delta )^{-1}g, Z^j - Z^{j-1})\Delta _j W \le \frac{1}{4} \Vert \Delta ^{-1/2} [Z^j - Z^{j-1}]\Vert ^2 + \varepsilon ^{2\gamma } \vert \Delta _j W\vert ^2 \Vert \Delta ^{-1/2} g\Vert ^2 \end{aligned}$$
(3.19)

to obtain

$$\begin{aligned}&\frac{1}{2}\Bigl (\Vert \Delta ^{-1/2}Z^j\Vert ^2-\Vert \Delta ^{-1/2}Z^{j-1}\Vert ^2+ \frac{1}{2}\Vert \Delta ^{-1/2}[Z^j-Z^{j-1}]\Vert ^2 + \frac{\varepsilon ^4}{4} k \Vert \nabla Z^j\Vert ^2\Bigr ) \\&\quad \le Ck\Vert \Delta ^{-1/2}Z^j\Vert ^2 +\frac{Ck}{\varepsilon }\Vert Z^j\Vert _{{\mathbb {L}}^3}^3 +\varepsilon ^\gamma (\Delta ^{-1/2}g, \Delta ^{-1/2} Z^{j-1})\Delta _jW \\&\qquad + C \varepsilon ^{2\gamma } \vert \Delta _j W\vert ^2 \, . \end{aligned}$$

4. We sum the last inequality from \(j=1\) up to \(j={\ell }\), and consider \(\max _{j\le {\ell }}\). On noting \(Z^0=0\), we obtain \({{\mathbb {P}}}\)-a.s.

$$\begin{aligned} A_{\ell } \le {C}{{\mathcal {R}}}_{\ell } + Ck\sum _{i=1}^{\ell } A_i\qquad (1 \le {\ell }\le J)\, , \end{aligned}$$

where

$$\begin{aligned} \begin{aligned}&A_{\ell } = \frac{1}{2} \max _{1\le j\le {\ell }}\Vert \Delta ^{-1/2}Z^j\Vert ^2 + \frac{1}{2} \sum _{i=1}^{\ell }\Vert \Delta ^{-1/2}[Z^i - Z^{i-1}]\Vert ^2 +\varepsilon ^4 k\sum _{i=1}^{{\ell }}\Vert \nabla Z^i\Vert ^2\, , \\&{{\mathcal {R}}}_{\ell } = \frac{k}{\varepsilon }\sum _{j=1}^{{\ell }}\Vert Z^j\Vert _{{\mathbb {L}}^3}^3+ \varepsilon ^\gamma \max _{1\le j\le {\ell }}\Big |\sum _{i=1}^{j}\bigl ((-\Delta )^{-1}g,Z^{i-1}\bigr )\Delta _i W\Big | + \varepsilon ^{2\gamma }\sum _{i=1}^{{\ell }}|\Delta _iW|^2\, . \end{aligned} \end{aligned}$$
(3.20)

Hence, the implicit version of the discrete Gronwall lemma implies for sufficiently small \(k \le k_0({{\mathcal {D}}})\) that \({{\mathbb {P}}}\)-a.s.

$$\begin{aligned} A_{\ell } \le C {\mathcal {R}}_{\ell } \qquad \forall \, {\ell } \le J\, , \end{aligned}$$
(3.21)

which concludes the proof. \(\square \)

In the deterministic setting (\(g\equiv 0\)), an induction argument, along with an interpolation estimate for the \({\mathbb {L}}^3\)-norm, is used to estimate the cubic error term on the right-hand side of (3.16); cf. [16]. In the stochastic setting, this induction argument is no longer applicable, which is why we bound the errors in (3.16) separately on two subsets \(\Omega _2\) and \(\Omega \setminus \Omega _2\). In the first step, we study accumulated errors on \(\Omega _2\) locally in time, thereby mimicking a related (time-continuous) argument in [3]. We introduce the stopping index \(1 \le J_\varepsilon \le J\)

$$\begin{aligned} J_\varepsilon :=\inf \bigl \{{1\le j\le J}:\,\, \frac{k}{\varepsilon } \sum _{i=1}^{j} \Vert Z^{i}\Vert _{{\mathbb {L}}^3}^3 > \varepsilon ^{\sigma _0} \bigr \}\, , \end{aligned}$$

where the constant \(\sigma _0>0\) will be specified later. The purpose of the stopping index is to identify those \(\omega \in \Omega \) where the cubic error term is small enough. In the sequel, we estimate the terms on the right-hand side of (3.16), putting \({\ell } = J_{\varepsilon }\). Clearly, the part \(\frac{k}{\varepsilon } \sum _{i=1}^{J_{\varepsilon }-1} \Vert Z^i \Vert _{{{\mathbb {L}}}^3}^3\) of \({{\mathcal {R}}}_{J_{\varepsilon }}\) in (3.20) is bounded by \(\varepsilon ^{\sigma _0}\); the remaining part will be denoted by \(\widetilde{\mathcal R}_{J_{\varepsilon }} := {{\mathcal {R}}}_{J_{\varepsilon }} - \frac{k}{\varepsilon } \sum _{i=1}^{J_{\varepsilon }-1} \Vert Z^i \Vert _{{{\mathbb {L}}}^3}^3\), i.e.,

$$\begin{aligned} \widetilde{{\mathcal {R}}}_{J_\varepsilon } = \varepsilon ^\gamma \max _{1\le j\le J_\varepsilon }\bigl |\sum _{i=1}^{j}\bigl ((-\Delta )^{-1}g,Z^{i-1}\bigr )\Delta _iW \bigr |+\varepsilon ^{2\gamma }\sum _{j=1}^{J_\varepsilon }|\Delta _j W|^2 + \frac{k}{\varepsilon }\Vert Z^{J_\varepsilon }\Vert _{{\mathbb {L}}^3}^3\,. \end{aligned}$$

For \(0< \kappa _0 < \sigma _0\), we gather those \(\omega \in \Omega \) in the subset

$$\begin{aligned} \Omega _2 := \bigl \{\omega \in \Omega : \, \widetilde{{\mathcal {R}}}_{J_\varepsilon }(\omega ) \le {\varepsilon }^{\kappa _0} \bigr \} \end{aligned}$$

where the error terms in Lemma 3.4 which cannot be controlled by the stopping index \(J_\varepsilon \) do not exceed the larger error threshold \(\varepsilon ^{\kappa _0}\). The following lemma quantifies the possible error accumulation in time on \(\Omega _2\) up to the stopping index \(J_\varepsilon \) in terms of \(\sigma _0, \kappa _0 >0\), and illustrates the role of k in this matter; it further provides a corresponding lower bound for the measure of \(\Omega _2\).
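Algorithmically, \(J_\varepsilon \) is simply the first time index at which the accumulated, \(\varepsilon ^{-1}\)-scaled cubic error exceeds \(\varepsilon ^{\sigma _0}\); a minimal sketch, with a hypothetical input array of the norms \(\Vert Z^i\Vert _{{\mathbb {L}}^3}\) along one sample path, reads:

```python
import numpy as np

def stopping_index(Z_L3, k, eps, sigma0):
    """First j with (k/eps) * sum_{i<=j} ||Z^i||_{L3}**3 > eps**sigma0.

    Z_L3 holds ||Z^1||_{L3}, ..., ||Z^J||_{L3} along one sample path;
    returns J if the threshold is never exceeded (then J_eps = J).
    """
    acc = (k / eps) * np.cumsum(np.asarray(Z_L3)**3)
    hit = np.nonzero(acc > eps**sigma0)[0]
    return int(hit[0]) + 1 if hit.size else len(Z_L3)
```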

Lemma 3.5

Assume \(\gamma > \frac{3}{2}\), \(0< \kappa _0 < \sigma _0\), \(\Vert u^\varepsilon _0\Vert _{{{\mathbb {H}}}^3} \le C \varepsilon ^{-{{\mathfrak {p}}}_{\texttt {CH}}}\) for \(\varepsilon \in (0,\varepsilon _0)\), and let \(k \le C \varepsilon ^{{\mathfrak l}_{\texttt {CH}}}\) with \({{\mathfrak {l}}}_{\texttt {CH}} \ge 3\) from Lemma 3.1 be sufficiently small. Then, there exists \(C >0\) such that

$$\begin{aligned} \mathrm{(i)}&\max _{1\le i\le J_\varepsilon }\Vert \Delta ^{-1/2}Z^i\Vert ^2+ {\varepsilon ^4} k\sum _{i=1}^{J_\varepsilon }\Vert \nabla Z^i\Vert ^2 \le C \varepsilon ^{\kappa _0} \qquad \text{ on } \Omega _2\, , \\ \mathrm{(ii)}&{{\mathbb {E}}}\Bigl [ \mathbb {1}_{\Omega _2} \Bigl (\max _{1\le i\le J_\varepsilon }\Vert \Delta ^{-1/2}Z^i\Vert ^2+\frac{\varepsilon ^4}{2} k\sum _{i=1}^{J_\varepsilon }\Vert \nabla Z^i\Vert ^2 \Bigr )\Bigr ]\le C\max \bigl \{ \frac{k^2}{\varepsilon ^4},\varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0}, \varepsilon ^{2\gamma }\bigr \}\, . \end{aligned}$$

Moreover, \({\mathbb {P}}[\Omega _2] \ge 1- \frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\).

The proof uses the discrete BDG-inequality (Lemma 3.3), which is suitable for the implicit Scheme 3.1; we use the higher-moment estimates from Lemma 3.2, (iii) to bound the last term in \(\widetilde{{\mathcal {R}}}_{J_{\varepsilon }}\).

Proof

1. Estimate (i) follows directly from Lemma 3.4, using the definitions of \(J_{\varepsilon }\) and \(\Omega _2\).

2. Let \(\Omega _2^{c} := \Omega \setminus \Omega _2\). We use Markov’s inequality to estimate \({\mathbb {P}}[\Omega _{2}^c]\le {\frac{1}{\varepsilon ^{\kappa _0}}}{\mathbb {E}}[\widetilde{{\mathcal {R}}}_{J_\varepsilon }]\). We first estimate the last term in \(\widetilde{\mathcal R}_{J_{\varepsilon }}\): interpolation of \({{\mathbb {L}}}^3\) between \({{\mathbb {L}}}^2\) and \({{\mathbb {H}}}^1\), then of \({{\mathbb {L}}}^2\) between \({{\mathbb {H}}}^{-1}\) and \({{\mathbb {H}}}^1\) (\({{\mathcal {D}}} \subset {{\mathbb {R}}^2}\)), and Young’s inequality yield

$$\begin{aligned} \frac{k}{\varepsilon } \Vert Z^{J_{\varepsilon }}\Vert ^3_{{\mathbb {L}}^3} \le \frac{Ck}{\varepsilon } \Vert Z^{J_{\varepsilon }}\Vert _{{{\mathbb {H}}}^{-1}} \Vert \nabla Z^{J_{\varepsilon }}\Vert ^2_{{{\mathbb {L}}}^{2}} \le \frac{1}{8} \Vert \Delta ^{-1/2} Z^{J_{\varepsilon }}\Vert ^2_{{{\mathbb {L}}}^{2}} + \frac{C k^2}{\varepsilon ^2} \Vert \nabla Z^{J_{\varepsilon }}\Vert ^4_{{{\mathbb {L}}}^2}\,. \end{aligned}$$
(3.22)
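Spelled out, the two interpolation steps combine, for mean-zero \(v \in {{\mathbb {H}}}^1\), to

$$\begin{aligned} \Vert v\Vert _{{{\mathbb {L}}}^3}^3 \le C \Vert v\Vert ^2\, \Vert v\Vert _{{{\mathbb {H}}}^1} \le C \Vert v\Vert _{{{\mathbb {H}}}^{-1}}\, \Vert v\Vert _{{{\mathbb {H}}}^1}^2 \le C \Vert \Delta ^{-1/2} v\Vert \, \Vert \nabla v\Vert ^2\, , \end{aligned}$$

where the first step is the two-dimensional Gagliardo–Nirenberg inequality, the second uses \(\Vert v\Vert ^2 \le \Vert v\Vert _{{{\mathbb {H}}}^{-1}} \Vert v\Vert _{{{\mathbb {H}}}^1}\), and the last one uses the Poincaré inequality for mean-zero functions; Young’s inequality with exponents \((2,2)\) then yields (3.22).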

The leading term on the right-hand side of (3.22) is absorbed into the left-hand side of the inequality in Lemma 3.4, which is considered on the whole of \(\Omega \); the expectation of the last term in (3.22) (on the whole of \(\Omega \)) is bounded via Lemma 3.2, iv) by \(\frac{Ck^2}{\varepsilon ^4} \bigl ( \vert {\mathcal E}(u_0^\varepsilon )\vert ^2 +1\bigr )\).

For the first term in \(\widetilde{{\mathcal {R}}}_{J_{\varepsilon }}\) we use the discrete BDG-inequality (Lemma 3.3) to bound its expectation by

$$\begin{aligned} C \varepsilon ^\gamma {\mathbb {E}}\left[ \sum _{i=1}^{J_\varepsilon +1}k \bigl ((-\Delta )^{-1}g,Z^{i-1} \bigr )^2\right] ^{\frac{1}{2}} \, .\end{aligned}$$

In order to benefit from the definition of \(J_{\varepsilon }\) in its estimate, we split off the last summand (which involves \(Z^{J_{\varepsilon }}\)),

$$\begin{aligned}= & {} C \varepsilon ^\gamma {\mathbb {E}}\left[ \sum _{i=1}^{J_\varepsilon }k \vert \bigl ((-\Delta )^{-1}g,Z^{i-1} \bigr ) \vert ^2\right] ^{\frac{1}{2}} + C\sqrt{k} \varepsilon ^{\gamma } {{\mathbb {E}}} \bigl [ \vert ((-\Delta )^{-1}g, Z^{J_{\varepsilon }})\vert ^2 \bigr ]^{\frac{1}{2}} \\\le & {} C\varepsilon ^\gamma {\mathbb {E}}\Bigl [k \bigl (\sum _{i=1}^{J_\varepsilon -1}\Vert Z^{i}\Vert _{{\mathbb {L}}^3}^{3}\bigr )^{\frac{2}{3}} \bigl (\sum _{i\le J}1^3 \bigr )^{\frac{1}{3}}\Bigr ]^{\frac{1}{2}} + C\sqrt{k} \varepsilon ^{\gamma } {{\mathbb {E}}} \bigl [ \vert (\nabla (-\Delta )^{-1}g, \nabla (-\Delta )^{-1}Z^{J_{\varepsilon }})\vert ^2 \bigr ]^{\frac{1}{2}} \\\le & {} C \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}} + C \sqrt{k} \varepsilon ^{\gamma } {{\mathbb {E}}}\left[ \Vert \Delta ^{-1/2} Z^{J_{\varepsilon }}\Vert ^2\right] ^{\frac{1}{2}} \le C \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}} + C k \varepsilon ^{2\gamma } + \frac{1}{8} {{\mathbb {E}}}\bigl [\Vert \Delta ^{-1/2} Z^{J_{\varepsilon }}\Vert ^2\bigr ]\, . \end{aligned}$$

Putting things together leads to \({\mathbb E}[\frac{1}{2}A_{J_{\varepsilon }}] \le C \left( \varepsilon ^{\sigma _0} + \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}} + \varepsilon ^{2\gamma } + \frac{k^2}{\varepsilon ^4}\right) \). Revisiting (3.22) then yields, via Lemma 3.4,

$$\begin{aligned} {{\mathbb {E}}}[\widetilde{{\mathcal {R}}}_{J_{\varepsilon }}] \le C\max \left\{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0}, \varepsilon ^{2\gamma }\right\} \, . \end{aligned}$$
(3.23)

3. Consider the inequality in Lemma 3.4 on \(\Omega _2\). The estimate (ii) then follows after taking expectation, using (3.23) and recalling the definition of \(J_\varepsilon \). \(\square \)

The previous lemma establishes local error bounds for iterates of Scheme 3.1, based on the stopping index \(J_\varepsilon \) and the subset \(\Omega _2 \subset \Omega \); the following lemma identifies values \((\gamma , \sigma _0, \kappa _0)\) for which Lemma 3.5 remains valid globally in time on \(\Omega _2\).

Lemma 3.6

Let the assumptions in Lemma 3.5 be valid. Assume

$$\begin{aligned} \sigma _0> 10\,, \qquad \sigma _0> \kappa _0 > \frac{2}{3}(\sigma _0 + 5)\, . \end{aligned}$$

There exists \(\varepsilon _0\equiv \varepsilon _0(\sigma _0, \kappa _0)\), such that for every \(\varepsilon \in (0,\varepsilon _0)\)

$$\begin{aligned} J_\varepsilon (\omega )=J \qquad \forall \, \omega \in \Omega _2\, . \end{aligned}$$

Moreover, \(\lim _{\varepsilon \downarrow 0}{{\mathbb {P}}}[\Omega _2] =1\) if

$$\begin{aligned} \gamma > \max \{{\frac{19}{3}}, \frac{\kappa _0}{2}\}\,, \qquad k^2 \le C\varepsilon ^{4+\kappa _0+\beta }\, , \end{aligned}$$

where \(\beta > 0\) may be arbitrarily small.

Compared to assumption (A), the less restrictive lower bound for \(\gamma \) is due to the use of the discrete spectral estimate (see Lemma 3.1, v)), which introduces a factor \(\varepsilon ^{-4}\) that is absorbed into \(\varepsilon ^{\frac{3}{2}\kappa _0}\) in the proof below. Consequently, it suffices to require \(\gamma \ge \frac{19}{3}\) in order to ensure a positive probability of \(\Omega _2\).

Proof

1. Assume that \(J_\varepsilon < J\) on \(\Omega _2\); we want to verify that

$$\begin{aligned} \frac{k}{\varepsilon }\sum _{i=1}^{J_\varepsilon }\Vert Z^{i}\Vert _{{\mathbb {L}}^3}^3 \le \varepsilon ^{\sigma _0} \qquad \text{ on }\,\, \Omega _2\, . \end{aligned}$$

Use (3.22), and the estimate Lemma 3.5 (i) to conclude

$$\begin{aligned} \frac{k}{\varepsilon }\sum _{i=1}^{J_\varepsilon }\Vert Z^{i}\Vert _{{\mathbb {L}}^3}^3 \le \frac{C}{\varepsilon } \max _{1\le i\le J_\varepsilon }\Vert \Delta ^{-1/2} Z^i\Vert _{{\mathbb {L}}^{2}}\Big (\sum _{i=1}^{J_\varepsilon }k\Vert \nabla Z^i\Vert ^2\Big ) \le C \varepsilon ^{-1 + \frac{\kappa _0}{2} +(\kappa _0-4)} \qquad \text{ on }\,\, \Omega _2\, . \end{aligned}$$

The right-hand side above is below \(\varepsilon ^{\sigma _0}\) for \(\frac{3\kappa _0}{2} > \sigma _0 + 5\) and \(\varepsilon < \varepsilon _0\) with sufficiently small \(\varepsilon _0\equiv \varepsilon _0(\sigma _0, \kappa _0)\); this contradicts the definition of the stopping index \(J_\varepsilon \), whence \(J_\varepsilon = J\) on \(\Omega _2\). The additional condition \(\kappa _0 < \sigma _0\) (which will be required in step 2. below) then imposes \(\sigma _0 > 10\).

2. Recall that the last part in Lemma 3.5 yields \({\mathbb {P}}[\Omega _2] \ge 1- C\varepsilon ^{-\kappa _0} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\). Hence, ensuring \({\mathbb {P}}[\Omega _2] > 0\) requires \(\gamma + \frac{\sigma _0+1}{3} -\kappa _0 >0\), \(\sigma _0>\kappa _0\), \(\gamma > \frac{\kappa _0}{2}\) and \(k^2 \le C \varepsilon ^{4+\kappa _0+\beta }\), \(\beta >0\). In addition, by step 1., \(\kappa _0 > \frac{2}{3}(\sigma _0 + {5})\), \(\sigma _0>10\), which along with \(\gamma + \frac{\sigma _0+1}{3} -\kappa _0 >0\), \(\sigma _0>\kappa _0\) implies \(\gamma \ge {\frac{19}{3}}\). \(\square \)

Next, we bound \(\max _{1\le i\le J}\Vert \Delta ^{-1/2}Z^i\Vert ^2+\frac{\varepsilon ^4}{2} k\sum _{i=1}^{J}\Vert \nabla Z^i\Vert ^2\) on the whole sample set. We collect the requirements on the analytical and numerical parameters:

(B):

Let \(u^\varepsilon _0 \in {{\mathbb {H}}}^3\), \({\mathcal {E}}(u_0^\varepsilon )<C\). Assume that \((\sigma _0, \kappa _0, \gamma )\) satisfy

$$\begin{aligned}\sigma _0>10, \qquad { \sigma _0}> \kappa _0 > \frac{2}{3}(\sigma _0 + 5), \qquad \gamma \ge \max \{ {\frac{19}{3}}, \frac{\kappa _0}{2}\}\, .\end{aligned}$$

For sufficiently small \(\varepsilon _0 \equiv \varepsilon _0(\sigma _0,\kappa _0) >0\) and \({{\mathfrak {l}}}_{\texttt {CH}} \ge 3\) from Lemma 3.1, and arbitrary \(0< \beta < \frac{1}{2}\), the time-step satisfies

$$\begin{aligned} k \le C \min \bigl \{\varepsilon ^{{{\mathfrak {l}}}_{\texttt {CH}}}, \varepsilon ^{2+\frac{\kappa _0}{2}+\beta }\bigr \} \qquad \forall \, \varepsilon \in (0,\varepsilon _0).\end{aligned}$$

We note that, except for the higher regularity of the initial condition, the assumption (B) is less restrictive than the assumption (A) from Sect. 2. Furthermore, the condition \({\mathcal {E}}(u_0^\varepsilon ) < C\) can be weakened to \({\mathcal {E}}(u_0^\varepsilon ) < C\varepsilon ^{-\alpha }\), \(\alpha >0\), cf. [17, Assumption (\(\hbox {GA}_2\))].
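The inequalities on \((\sigma _0, \kappa _0, \gamma )\) collected in (B) are easy to check mechanically. The following minimal Python sketch is illustrative only: the constants \(C, \varepsilon _0\), the assumptions on \(u_0^\varepsilon \), and the time-step restriction are not encoded, and the sample triples are chosen purely for illustration.

```python
def admissible_B(sigma0, kappa0, gamma):
    """Check the parameter inequalities on (sigma0, kappa0, gamma) in assumption (B)."""
    return (sigma0 > 10
            and sigma0 > kappa0 > (2.0 / 3.0) * (sigma0 + 5)
            and gamma >= max(19.0 / 3.0, kappa0 / 2.0))

print(admissible_B(10.2, 10.15, 6.5))  # True: an admissible triple close to the corner
print(admissible_B(10.2, 10.0, 6.5))   # False: kappa0 <= (2/3)(sigma0 + 5)
```

Note that the admissible region is a narrow wedge: \(\kappa _0\) is squeezed between \(\frac{2}{3}(\sigma _0+5)\) and \(\sigma _0\), which is non-empty precisely because \(\sigma _0 > 10\).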

Lemma 3.7

Suppose (B). Then there exists \(C>0\) such that

$$\begin{aligned} {\mathbb {E}}\big [\max _{1\le j\le J}\Vert Z^j\Vert ^2_{{\mathbb {H}}^{-1}} + {\varepsilon ^4} k\sum _{i=1}^{J}\Vert \nabla Z^i\Vert ^2 \bigr ] \le \Bigl (\frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{2}}\,. \end{aligned}$$

Proof

Recall the notation from (3.20), and split \({\mathbb {E}}[A_{J}] = {\mathbb {E}}[\mathbb {1}_{\Omega _2} A_J] + {\mathbb {E}}[\mathbb {1}_{\Omega ^c_2} A_J]\). Due to assumption (B) it follows directly from Lemma 3.5, (ii) and Lemma 3.6 that

$$\begin{aligned} {{\mathbb {E}}}[\mathbb {1}_{\Omega _2} A_J] \le C\max \bigl \{ \frac{k^2}{\varepsilon ^4},\varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0}, \varepsilon ^{2\gamma }\bigr \}\,. \end{aligned}$$
(3.24)

In order to bound \({{\mathbb {E}}}[\mathbb {1}_{\Omega ^c_2} A_J]\), we use the embedding \({\mathbb {L}}^4\subset {\mathbb {H}}^{-1}\) which along with the higher-moment estimate from Lemma 3.2 iv) implies that

$$\begin{aligned} {\mathbb {E}}\big [A_J^2\big ] \le C {\mathbb {E}}\big [|{\mathcal {E}}(X^J)|^2\big ] \le C(|{\mathcal {E}}(u_0^\varepsilon )|^2+1)\,. \end{aligned}$$

Next, we note that by Lemma 3.5 it follows that

$$\begin{aligned} {{\mathbb {P}}}[\Omega ^c_2]\le 1 - {{\mathbb {P}}}[\Omega _2]\le \frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\,. \end{aligned}$$

Hence, using the Cauchy–Schwarz inequality, we get

$$\begin{aligned}&{{\mathbb {E}}}[\mathbb {1}_{\Omega ^c_2} A_J] \le \bigl ({\mathbb {P}}[\Omega _2^c]\bigr )^{1/2} \bigl ({{\mathbb {E}}}[A_J^2]\bigr )^{1/2} \nonumber \\&\quad \le \Bigl (\frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{2}} \bigl ( {{\mathcal {E}}}(u^\varepsilon _0)+1\bigr )\,. \end{aligned}$$
(3.25)

Combining (3.24) and (3.25), the statement follows under assumption (B), since the latter contribution dominates the error. \(\square \)

The dominating error contribution in Lemma 3.7 comes from the term \({{\mathbb {E}}}[\mathbb {1}_{\Omega ^c_2} A_J]\). This is in contrast to Sect. 2 where the error contribution from the set \(\Omega _1^c\) can be made arbitrarily small, due to the additional parameter \({\mathfrak {l}}>0\) in Lemma 2.2 which can be chosen arbitrarily large independently of the other parameters.

We are now ready to prove the first main result of this paper.

Theorem 3.8

Let \(u_0^\varepsilon \in {\mathbb {H}}^3\), let u be the strong solution of (1.1), and let \(\left\{ X^j,\ j=1,\dots , J\right\} \) solve Scheme 3.1. Suppose (A). Then there exists a constant \(C>0\) such that for all \(0 {< } \beta < \frac{1}{2}\)

$$\begin{aligned}&{\mathbb {E}}\big [\max _{1\le j\le J}\Vert u(t_j)-X^j\Vert _{{\mathbb {H}}^{-1}}^2 \bigr ] \\&\quad \le C \max \Bigl \{{\varepsilon ^{\frac{2}{3}\sigma _0}}, \Bigl (\varepsilon ^{-\kappa _0} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{2}}, \frac{k^{2-\beta }}{\varepsilon ^{{{\mathfrak {m}}}_{\texttt {CH}}}}\Bigr \}\, . \end{aligned}$$

Due to condition (A)\(_2\) it holds that \(\sigma _0-\kappa _0 < \frac{1}{3}\sigma _0\). Consequently the contribution \(\varepsilon ^{\frac{2}{3}\sigma _0}\) in the error estimate is dominated by \(\varepsilon ^{\frac{\sigma _0-\kappa _0}{2}}\); it is only stated explicitly to highlight the error contribution from the difference \(u-u_{\texttt {CH}}\) from Sect. 2.

Proof

We estimate the error via splitting it into three contributions,

$$\begin{aligned}&\max _{1\le j\le J}\Vert u(t_j)-u_{\texttt {CH}}(t_j)\Vert _{{\mathbb {H}}^{-1}}^2 + \max _{1\le j\le J}\Vert u_{\texttt {CH}}(t_j) - X^j_{\texttt {CH}}\Vert _{{\mathbb {H}}^{-1}}^2 \\&\quad + \max _{1\le j\le J}\Vert X^j_{\texttt {CH}} - X^j\Vert _{{\mathbb {H}}^{-1}}^2 =: I + II + III\, . \end{aligned}$$

Lemma 2.3 bounds \({{\mathbb {E}}}[I]\), Lemma 3.1, iv) yields \({{\mathbb {E}}}[II] \le \frac{k^{2-\beta }}{\varepsilon ^{{{\mathfrak {m}}}_{\texttt {CH}}}}\), and \({{\mathbb {E}}}[III]\) is bounded in Lemma 3.7. \(\square \)

Remark 3.9

An alternative approach to Theorem 3.8 would be to follow the arguments in [23] for a related problem, which exploit a weak monotonicity property of the drift operator in (1.1), and stability of the discretization to obtain a strong error estimate for Scheme 3.1 of the form

$$\begin{aligned} {\mathbb {E}}\Big [\max _{1\le j\le J}\Vert u(t_j)-X^j\Vert _{{\mathbb {H}}^{-1}}^2\Big ] \le C_{\beta } \exp \bigl (\frac{T}{\varepsilon } \bigr )k^{1-\beta } \qquad (\beta > 0)\, . \end{aligned}$$
(3.26)

While the error in (3.26) tends to zero for \(k \downarrow 0\), this estimate is only of limited practical relevance in the asymptotic regime where \(\varepsilon \) is small, since prohibitively small step sizes \(k \ll \exp (-\frac{1}{\varepsilon })\) are required in (3.26) to guarantee small approximation errors for iterates from Scheme 3.1; for \(T = 1\) and \(\varepsilon = 0.02\), for instance, the prefactor in (3.26) is already \(e^{50} \approx 5 \times 10^{21}\). Moreover, the error analysis that leads to (3.26) does not provide any insight on how to numerically resolve diffuse interfaces via a proper balancing of the discretization parameter k and the interface width \(\varepsilon \), which is relevant in the asymptotic regime where \(\varepsilon \ll 1\).

4 Space–time discretization of (1.1)

We generalize the convergence results in Sect. 3 for Scheme 3.1 to its space–time discretization. For this purpose, we introduce some further notation: let \({{\mathcal {T}}} _h\) be a quasi-uniform triangulation of \({{\mathcal {D}}}\), and \({\mathbb V}_h \subset {{\mathbb {H}}}^1\) be the finite element space of piecewise affine, globally continuous functions,

$$\begin{aligned}{{\mathbb {V}}}_h := \bigl \{ v_h \in C(\overline{{\mathcal {D}}});\, v_h \bigl \vert _K \in P_1(K) \quad \forall \, K \in {{\mathcal {T}}}_h\bigr \}\, ,\end{aligned}$$

and \(\mathring{{{\mathbb {V}}}}_h := \bigl \{ v_h \in {{\mathbb {V}}}_h:\, (v_h,1) = 0\bigr \}\). We recall the \({{\mathbb {L}}}^2\)-projection \(P_{{{\mathbb {L}}}^2}: {{\mathbb {L}}}^2 \rightarrow {{\mathbb {V}}}_h\), via

$$\begin{aligned}\bigl ( P_{{{\mathbb {L}}}^2} v -v, \eta _h\bigr ) = 0 \qquad \forall \, \eta _h \in {{\mathbb {V}}}_h\, , \end{aligned}$$

and the Riesz projection \(P_{{{\mathbb {H}}}^1}: {{\mathbb {H}}}^1 \cap {{\mathbb {L}}}^2_0 \rightarrow \mathring{{\mathbb {V}}}_h\), via

$$\begin{aligned}\bigl ( \nabla [P_{{{\mathbb {H}}}^1} v -v], \nabla \eta _h\bigr ) = 0 \qquad \forall \, \eta _h \in {{\mathbb {V}}}_h\, .\end{aligned}$$

In what follows, we allow meshes \({{\mathcal {T}}}_h\) for which \(P_{{{\mathbb {L}}}^2}\) is \({{\mathbb {H}}}^1\)-stable; see [10]. Also, we define the inverse discrete Laplacian \((-\Delta _h)^{-1}: {\mathbb {L}}^2_0 \rightarrow \mathring{{\mathbb {V}}}_h\) via

$$\begin{aligned}\bigl (\nabla (-\Delta _h)^{-1}v, \nabla \eta _h\bigr ) = (v,\eta _h) \qquad \forall \, \eta _h \in {{\mathbb {V}}}_h\, . \end{aligned}$$

We are ready to present the space discretization of Scheme 3.1.

Scheme 4.1

For every \(1 \le j \le J\), find a \([{{\mathbb {V}}}_h]^2\)-valued r.v. \((X_h^j, w_h^j)\) such that \({{\mathbb {P}}}\)-a.s.

$$\begin{aligned} \begin{aligned}&(X_h^j-X_h^{j-1},\varphi _h)+k(\nabla w_h^{j},\nabla \varphi _h)=\varepsilon ^{\gamma }\bigl (g,\varphi _h \bigr )\Delta _j W \;\;\;\; \, \, \quad \, \forall \, \varphi _h \in {{\mathbb {V}}}_h\,,\\&\varepsilon (\nabla X_h^j,\nabla \psi _h)+\frac{1}{\varepsilon } \bigl (f(X_h^j),\psi _h \bigr )=(w^j_h,\psi _h) \qquad \qquad \quad \quad \ \, \forall \, \psi _h \in {{\mathbb {V}}}_h\, ,\\&X^0_h = {P_{{{\mathbb {L}}}^2}u_0^\varepsilon } \in \mathring{{\mathbb {V}}}_h\, . \end{aligned} \end{aligned}$$

The solution \(\{(X_h^j, w_h^j)\}_{1 \le j \le J}\) satisfies \((X_h^j,1) = 0\) \({{\mathbb {P}}}\)-a.s. for all \(1 \le j \le J\).
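For orientation, the following Python sketch mimics the structure of one implicit update of the scheme. It is a simplified stand-in rather than the method itself: the \(P_1\) finite element spaces on the two-dimensional domain are replaced by a one-dimensional finite-difference Neumann Laplacian, and all parameter values, the data g, and the initial state are hypothetical.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
N, eps, gamma, k, J = 64, 0.1, 2.0, 1.0e-3, 40   # hypothetical parameters
h = 1.0 / N
x = (np.arange(N) + 0.5) * h                     # cell-centred grid on (0, 1)

# one-dimensional finite-difference Neumann Laplacian; its zero row sums
# are the discrete analogue of the mass conservation noted above
L = np.zeros((N, N))
for i in range(N):
    L[i, i] -= 2.0
    L[i, max(i - 1, 0)] += 1.0
    L[i, min(i + 1, N - 1)] += 1.0
L /= h ** 2

g = np.cos(2.0 * np.pi * x)           # mean-zero data with homogeneous Neumann traces
f = lambda u: u ** 3 - u              # f = F'
X = 0.1 * np.cos(np.pi * x)           # mean-zero initial state (stands in for u_0^eps)

for j in range(J):
    dW = np.sqrt(k) * rng.standard_normal()      # Wiener increment Delta_j W
    X_old = X.copy()

    def residual(z):                  # coupled implicit equations for (X^j, w^j)
        Xn, wn = z[:N], z[N:]
        r1 = Xn - X_old - k * (L @ wn) - eps ** gamma * g * dW
        r2 = -eps * (L @ Xn) + f(Xn) / eps - wn
        return np.concatenate([r1, r2])

    w_guess = -eps * (L @ X_old) + f(X_old) / eps
    z = fsolve(residual, np.concatenate([X_old, w_guess]))
    X = z[:N]

print('mass of X^J (conserved up to solver tolerance):', h * X.sum())
```

The printed mass stays at zero up to the nonlinear-solver tolerance, mirroring the property \((X_h^j,1) = 0\) above.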

Claim 1. \(\{(X_h^j, w_h^j)\}_{1 \le j \le J}\) inherits all stability bounds in Lemma 3.2.

Proof

i’) In order to verify the corresponding version of (i) for \(\{{{\mathcal {E}}}(X^j_h)\}_{1 \le j\le J}\), we may choose \(\varphi _h = w_h^j(\omega )\) and \(\psi _h = [X^j_h - X^{j-1}_h](\omega )\) in Scheme 4.1, as in part (i) of the proof of Lemma 3.2. We then obtain a corresponding version of (3.1), and (3.2).

The next argument in the proof of Lemma 3.2 that leads to (3.3) may again be reproduced for Scheme 4.1 by choosing \(\varphi _h = (-\Delta _h)^{-1}[X^j_h - X^{j-1}_h](\omega )\), and using the definition of \((-\Delta _h)^{-1}\), as well as \(X^j_h, P_{{{\mathbb {L}}}^2} g \in {{\mathbb {L}}}^2_0\) \({\mathbb {P}}\)-a.s., such that

$$\begin{aligned}&\Vert \nabla (-\Delta _h)^{-1}[X^j_h - X^{j-1}_h]\Vert ^2 \\&\quad \le \Bigl ( k \Vert \nabla w^j_h\Vert + \varepsilon ^\gamma \Vert \nabla (-\Delta _h)^{-1}P_{{{\mathbb {L}}}^2} g\Vert \vert \Delta _j W\vert \Bigr )\Vert \nabla (-\Delta _h)^{-1}[X^j_h - X^{j-1}_h]\Vert \, , \end{aligned}$$

since \(\Vert \nabla (-\Delta _h)^{-1}P_{{{\mathbb {L}}}^2} g\Vert \le \Vert g \Vert \le C\).

To obtain the first identity in (3.5) for Scheme 4.1, we use \(\varepsilon ^\gamma (g, w^j_h)\Delta _jW = \varepsilon ^\gamma \bigl (P_{{{\mathbb {L}}}^2}g, w^j_h \bigr )\Delta _jW\), such that the second equation in Scheme 4.1 with \(\psi _h =P_{{{\mathbb {L}}}^2}g\) may be applied; as a consequence, g has to be replaced by \(P_{{\mathbb {L}}^2}g\) in the rest of equality (3.5). This modification leads to the term \(\Vert \nabla P_{{{\mathbb {L}}}^2}g\Vert \) in (3.6), which is again bounded by \(\Vert \nabla g\Vert \); the bound \(\Vert P_{{{\mathbb {L}}}^2}g\Vert _{{{\mathbb {L}}}^{\infty }} \le C\), which is required to bound the term \(A_{3,1}\) from (3.7), follows by an approximation result; cf. [7, Chapter 7]. The above steps then yield the estimate (3.8) for \(\{ (X^j_h, w^j_h)\}_{1 \le j \le J}\).

ii’), iii’), iv’) We may follow the argument in the proof of Lemma 3.2 without change. \(\square \)

Claim 2. Lemma 3.4 holds for \(\{(X_h^j, w_h^j)\}_{1 \le j \le J}\), i.e.: \(Z^j_h := X^j_h - X^j_{{\texttt {CH}};h}\) satisfies \({\mathbb {P}}\)-a.s.

$$\begin{aligned} \begin{aligned}&\max _{1\le j\le {\ell }}\Vert \nabla (-\Delta _h)^{-1}Z_h^j\Vert ^2+c{\varepsilon k}\sum _{i=1}^{{\ell }}\Vert \nabla Z_h^i\Vert ^2\\&\quad \le \frac{Ck}{\varepsilon }\sum _{i=1}^{{\ell }}\Vert Z_h^i\Vert _{{\mathbb {L}}^3}^3+ C\varepsilon ^\gamma \max _{1\le j\le {\ell }}|\sum _{i=1}^{j}((-\Delta _h)^{-1}P_{{{\mathbb {L}}}^2} g,Z_h^{i-1})\Delta _i W| \\&\qquad +C\varepsilon ^{2\gamma } \sum _{i=1}^{{\ell }}|\Delta _i W|^2 \,, \end{aligned} \end{aligned}$$

for all \({\ell } \le J\), provided that additionally

$$\begin{aligned} k\le C\min \{\varepsilon ^{{{\mathfrak {p}}}_{\texttt {CH}}}, h^{\widetilde{{\mathfrak {q}}}_{\texttt {CH}}} \}\,, \qquad {h \le C \min \{ 1, k^{2\beta }\} \varepsilon ^{\widetilde{{\mathfrak {p}}}_{\texttt {CH}}}} \end{aligned}$$
(4.1)

for any \(\beta > 0\), and \({{\mathfrak {p}}}_{\texttt {CH}}, \widetilde{{\mathfrak {q}}}_{\texttt {CH}}, \widetilde{{\mathfrak {p}}}_{\texttt {CH}} >0\). The exponents \({{\mathfrak {p}}}_{\texttt {CH}}, \widetilde{{\mathfrak {q}}}_{\texttt {CH}}, \widetilde{{\mathfrak {p}}}_{\texttt {CH}} >0\) are chosen in order to satisfy the assumptions of [16, Corollary 2] and [17, Theorem 3.2]. In particular (4.1) is required to obtain the fully discrete counterpart of Lemma 3.1, (iii)–(iv).

Remark 4.1

Requirement (4.1)\(_2\) comes from [16, Corollary 2, assumption 4)] (see also [17, Theorem 3.1, assumption 3)]). More precisely, [16, Corollary 2] is applied in the current setting with \(\gamma _1=1\), \(\delta =1\), \(p=4\), \(\sigma _1=0\), \(N=2\) (where N is the spatial dimension), which yields the following condition on \({\hat{\pi }}\) (defined in [16, Corollary 2]):

$$\begin{aligned} {\widehat{\pi }}(h, \varepsilon ,N)\le & {} C k^{\beta \frac{12-N}{4-N}} \bigl ( 1+ \ln \frac{1}{k}\bigr )^{- \frac{2N}{4-N}}\varepsilon ^{-\frac{4+N}{4-N}}\, . \end{aligned}$$

Hence, (4.1)\(_2\) is a consequence of the above condition for \(N=2\) where for simplicity we estimate \(|\ln k|^{-1} \ge k^{\beta }\) for sufficiently small k. Since \(\beta >0\) may be chosen arbitrarily small, the resulting condition does not severely restrict admissible \(h>0\).

Proof

Again, we here denote by \(\{(X_{{\texttt {CH}};h}^j, w_{{\texttt {CH}};h}^j)\}_{1 \le j \le J} \subset [{{\mathbb {V}}}_h]^2\) the solution of Scheme 4.1 for \(g \equiv 0\), whose stability and convergence properties are studied in [16, 17]. Under the assumption (4.1), [17, Theorem 3.2, (iii)] provides the bound

$$\begin{aligned} \max _{0 \le j \le J} \Vert X^j_{{\texttt {CH}};h}\Vert _{{\mathbb {L}}^{\infty }} \le C\, . \end{aligned}$$

We use this bound to adapt estimate (3.17) to the present setting and get

$$\begin{aligned} \begin{aligned} \bigl (f(X_h^j)-f(X_{{\texttt {CH}};h}^{j}),Z^j_h\bigr )&\ge \bigl (f'(X_{{\texttt {CH}};h}^{j})Z^j_h,Z^j_h \bigr ) - C\Vert Z^j_h\Vert _{{\mathbb {L}}^3}^3\\&\ge [1-{\varepsilon ^3}] \bigl (f'(X_{{\texttt {CH}};h}^{j})Z^j_h,Z^j_h\bigr )-C\Vert Z^j_h\Vert _{{\mathbb {L}}^3}^3 \\&\quad + {\varepsilon ^3} \bigl (f'(X_{{\texttt {CH}};h}^{j})Z^j_h,Z^j_h\bigr ) \, . \end{aligned} \end{aligned}$$

Step 2. of the proof of Lemma 3.4 involves the discrete spectral estimate (see Lemma 3.1, iv)) for \(\{X^j_{\texttt {CH}}\}_{j}\) to handle the leading term on the right-hand side of (3.17)—which we do not have for \(\{X^j_{{\texttt {CH}};h}\}_j\) in the present setting. Therefore, we perturb the leading term on the right-hand side of the last inequality, and use the \({\mathbb {L}}^\infty \)-bounds for \(X^j_{{\texttt {CH}}}\), \(X^j_{{\texttt {CH}};h}\), as well as the mean-value theorem to conclude

$$\begin{aligned} \begin{aligned} \bigl (f'(X_{{\texttt {CH}};h}^{j})Z^j_h,Z^j_h \bigr )&= \bigl (f'(X_{{\texttt {CH}}}^{j})Z^j_h,Z^j_h \bigr ) + \Bigl ( \bigl [f'(X_{{\texttt {CH}};h}^{j}) - f'(X_{{\texttt {CH}}}^{j})\bigr ]Z^j_h,Z^j_h \Bigr ) \\&\ge \bigl (f'(X_{{\texttt {CH}}}^{j})Z^j_h,Z^j_h \bigr ) - C \Vert Z^j_h\Vert ^3_{{{\mathbb {L}}}^3}\, . \end{aligned} \end{aligned}$$

The remaining steps in the proof of Lemma 3.4 now follow with only minor adjustments. \(\square \)

Claim 3. Additionally assume (4.1). Then Lemma 3.5 holds for \(\{ Z^j_h\}_j\), i.e.,

$$\begin{aligned} \mathrm{(i)}&\max _{1\le i\le J_{\varepsilon ,h}}\Vert \nabla (-\Delta _h)^{-1}Z_h^i\Vert ^2+ {\varepsilon ^4} k\sum _{i=1}^{J_{\varepsilon ,h}}\Vert \nabla Z_h^i\Vert ^2 \le C \varepsilon ^{\kappa _0} \qquad \text{ on } \Omega _{2;h}\, , \\ \mathrm{(ii)}&{{\mathbb {E}}}\Bigl [ \mathbb {1}_{\Omega _{2;h}} \Bigl (\max _{1\le i\le J_{\varepsilon ,h}}\Vert \nabla (-\Delta _h)^{-1}Z_h^i\Vert ^2+\frac{\varepsilon ^4}{2} k\sum _{i=1}^{J_{\varepsilon ,h}}\Vert \nabla Z_h^i\Vert ^2 \Bigr )\Bigr ]\le C\max \bigl \{ \frac{k^2}{\varepsilon ^4},\\&\quad \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0}, \varepsilon ^{2\gamma }\bigr \}\, . \end{aligned}$$

Moreover, \({\mathbb {P}}[\Omega _{2;h}] \ge 1- \frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\), where \({\Omega }_{2;h} := \bigl \{\omega \in \Omega ;\ \widetilde{\mathcal R}_{J_{\varepsilon ,h};h}(\omega ) \le \varepsilon ^{\kappa _0}\bigr \}\), for \(J_{\varepsilon ,h} :=\inf \bigl \{1 \le j \le J:\, \frac{k}{\varepsilon } \sum _{i=1}^{j} \Vert Z_h^{i}\Vert _{{\mathbb {L}}^3}^3 > \varepsilon ^{\sigma _0} \bigr \}\), and

$$\begin{aligned} \widetilde{{\mathcal {R}}}_{J_{\varepsilon ,h};h}:=\varepsilon ^\gamma \max _{1\le j\le J_{\varepsilon ,h}}\bigl |\sum _{i=1}^{j}\bigl ((-\Delta _h)^{-1}P_{{{\mathbb {L}}}^2} g,Z_h^{i-1}\bigr )\Delta _iW \bigr |+\varepsilon ^{2\gamma }\sum _{j=1}^{J_{\varepsilon ,h}}|\Delta W_j|^2 + \frac{k}{\varepsilon }\Vert Z_h^{J_{\varepsilon ,h}}\Vert _{{\mathbb {L}}^3}^3\, . \end{aligned}$$

Proof

The proof of Lemma 3.5 transfers directly to the present setting. \(\square \)

Claim 4. Lemma 3.6 remains valid for \(\{Z^j_h\}_j\) accordingly, provided that \(h \le C \varepsilon ^{\widetilde{{\mathfrak {p}}}_{\texttt {CH}}}\) and \(k \le C h^{\widetilde{{\mathfrak {q}}}_{\texttt {CH}}}\), i.e.: \(J_{\varepsilon ,h} = J\) for all \(\omega \in \Omega _{2;h}\).

Proof

We only need to adapt the interpolation argument for \({{\mathbb {L}}}^3\) to the present setting, starting with the estimate \(\Vert Z^i_h\Vert _{{{\mathbb {L}}}^3}^3 \le C \Vert Z^i_h\Vert _{{\mathbb H}^{-1}} \Vert \nabla Z_h^i\Vert ^2\). By the definition of the \({{\mathbb {H}}}^{-1}\)-norm, the definition and \({\mathbb H}^1\)-stability of the \({{{\mathbb {L}}}^2}\)-projection, and again the fact that \((Z^i_h,1) = 0\), we deduce

$$\begin{aligned} \begin{aligned} \Vert Z^i_h\Vert _{{{\mathbb {H}}}^{-1}}&= \sup _{\psi \in {{\mathbb {H}}}^1} \frac{(Z^i_h, P_{{{\mathbb {L}}}^2} \psi )}{\Vert \psi \Vert _{{\mathbb H}^1}} \le C \sup _{\psi \in {{\mathbb {H}}}^1} \frac{(Z^i_h, P_{{{\mathbb {L}}}^2} \psi )}{\Vert \nabla P_{{{\mathbb {L}}}^2} \psi \Vert } \\&= C \sup _{\psi \in {{\mathbb {H}}}^1} \frac{(\nabla ((-\Delta _h)^{-1}Z^i_h), \nabla P_{{{\mathbb {L}}}^2} \psi )}{\Vert \nabla P_{{{\mathbb {L}}}^2} \psi \Vert } \\&\le C \Vert \nabla (-\Delta _h)^{-1}Z^i_h\Vert \, . \end{aligned} \end{aligned}$$

\(\square \)

Next, we formulate a counterpart of Lemma 3.7 for the fully discrete numerical solution; as a consequence of Claims 1 to 4 above, the corollary can be proved analogously to Lemma 3.7, with assumption (B) complemented by the additional restriction (4.1) on the discretization parameters.

Corollary 4.2

Suppose (B) and (4.1). Then there exists \(C>0\) such that

$$\begin{aligned} {\mathbb {E}}\big [\max _{1\le j\le J}\Vert Z^j_h\Vert ^2_{{\mathbb {H}}^{-1}} + {\varepsilon ^4} k\sum _{i=1}^{J}\Vert \nabla Z^i_h\Vert ^2 \bigr ] \le \Bigl (\frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{2}}\,.\end{aligned}$$

We are now ready to extend Theorem 3.8 to Scheme 4.1.

Theorem 4.3

Let u be the strong solution of (1.1), and \(\left\{ X^j_h;\, 1 \le j \le J\right\} \) the solution of Scheme 4.1. Assume (B) and (4.1). Then there exists \(C>0\) such that

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\Big [\max _{1\le j\le J}\Vert u(t_j)-X^j_h\Vert _{{\mathbb {H}}^{-1}}^2 \Big ] \\&\quad \le C \max \Bigl \{\Bigl (\varepsilon ^{-\kappa _0} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{2}}, \frac{k^{2-\beta }}{ \varepsilon ^{{{\mathfrak {m}}}_{\texttt {CH}}}} + \frac{h^4(1+k^{-\beta })}{ \varepsilon ^{\widetilde{\mathfrak m}_{\texttt {CH}}}} \Bigr \}\, , \end{aligned} \end{aligned}$$

where \({{\mathfrak {m}}}_{\texttt {CH}}, \widetilde{{\mathfrak {m}}}_{\texttt {CH}} > 0\).

We note that the exponents \({{\mathfrak {m}}}_{\texttt {CH}}, \widetilde{{\mathfrak {m}}}_{\texttt {CH}} > 0\) in the above estimate can be determined upon closer inspection of [16, Corollary 2] under assumption (4.1). Furthermore, assumption (4.1), which is a simplified reformulation of assumption 4) in [16, Corollary 2], guarantees that \(\lim _{\varepsilon \downarrow 0}\left( \frac{k^{2-\beta }}{ \varepsilon ^{{{\mathfrak {m}}}_{\texttt {CH}}}} + \frac{{h^4 (1+k^{-\beta })}}{ \varepsilon ^{\widetilde{\mathfrak m}_{\texttt {CH}}}}\right) = 0\).

Proof

We split the error into three contributions,

$$\begin{aligned}&{\mathbb {E}}\big [\max _{1\le j \le J}\Vert u(t_j)-X^j_h\Vert _{{\mathbb {H}}^{-1}}^2\big ] \\&\quad \le 3{\mathbb {E}}\big [\max _{1\le j\le J}\Vert u(t_j)-u_{\texttt {CH}}(t_j)\Vert _{{\mathbb {H}}^{-1}}^2\big ] \\&\qquad +3\max _{1\le j\le J}\Vert u_{\texttt {CH}}(t_j)-X^j_{{\texttt {CH}};h}\Vert _{{\mathbb {H}}^{-1}}^2 +3{\mathbb {E}}\big [\max _{1\le j\le J}\Vert X^j_h-X^j_{{\texttt {CH}};h}\Vert _{{\mathbb {H}}^{-1}}^2\big ]. \end{aligned}$$

The first term is bounded by \(C\varepsilon ^{\frac{2}{3}\sigma _0}\) as in Theorem 3.8. The second term is bounded by \(C \bigl (\frac{k^{2-\beta }}{ \varepsilon ^{{{\mathfrak {m}}}_{\texttt {CH}}}} + \frac{{h^4}(1+k^{-\beta })}{ \varepsilon ^{\widetilde{\mathfrak m}_{\texttt {CH}}}}\bigr )\) thanks to [16, Corollary 2] (stated here in a simplified form, cf. Remark 4.1), provided assumption (4.1) holds. The last term is bounded by

$$\begin{aligned} \Bigl (\frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{2}}\,, \end{aligned}$$

thanks to Corollary 4.2. \(\square \)

5 Sharp-interface limit

In this section, we show the convergence of iterates \(\{X^j\}_{j=1}^J\) of Scheme 3.1 to the solution of a sharp interface problem. Recall that in the absence of noise, the sharp interface limit of (1.1) is given by the following deterministic Hele–Shaw/Mullins–Sekerka problem: Find \(v_{\texttt {MS}} : [0,T] \times {\mathcal {D}} \rightarrow {\mathbb {R}}\) and the interface \(\big \{\Gamma ^{{\texttt {MS}}}_t;\, 0 \le t \le T\big \}\) such that for all \(t\in (0,T]\) the following conditions hold:

$$\begin{aligned} - \Delta v_{\texttt {MS}}&= 0&\qquad \text{ in } \ {\mathcal {D}} \setminus \Gamma ^{\texttt {MS}}_t\,, \end{aligned}$$
(5.1a)
$$\begin{aligned} \left[ \partial _{n_\Gamma }v_{\texttt {MS}}\right] _{\Gamma ^{\texttt {MS}}_t}&= - 2\,{\mathcal {V}}&\qquad \text{ on } \ \Gamma ^{\texttt {MS}}_t\,, \end{aligned}$$
(5.1b)
$$\begin{aligned} v_{\texttt {MS}}&= \alpha \,\varkappa&\qquad \text{ on } \ \Gamma ^{\texttt {MS}}_t\,, \end{aligned}$$
(5.1c)
$$\begin{aligned} \partial _{n}v_{\texttt {MS}}&= 0&\qquad \text{ on } \partial {\mathcal {D}}\,, \end{aligned}$$
(5.1d)
$$\begin{aligned} \Gamma ^{\texttt {MS}}_0&= \Gamma _{00} \,, \end{aligned}$$
(5.1e)

where \(\varkappa \) is the curvature of the evolving interface \(\Gamma ^{\texttt {MS}}_t\), \({\mathcal {V}}\) is its velocity in the direction of the normal \({{n_\Gamma }}\), and the jump of the normal derivative is given by \([\frac{\partial v_{\texttt {MS}}}{\partial {{n_\Gamma }}}]_{\Gamma ^{\texttt {MS}}_t}({z}) = (\frac{\partial v_{{\texttt {MS}},+}}{\partial { {n_\Gamma }}} - \frac{\partial v_{{\texttt {MS}},-}}{\partial {{n_\Gamma }}})({z})\) for all \({z}\in \Gamma ^{\texttt {MS}}_t\). The constant in (5.1c) is chosen as \(\alpha = \tfrac{1}{2}\,c_F\), where \(c_F= \int _{-1}^1 \sqrt{2\,F(s)}\;{\mathrm{d}}s = \tfrac{1}{3}\,2^\frac{3}{2}\), and F is the double-well potential; cf. [1] for a further discussion of the model.
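For the quartic double-well potential, the value of \(c_F\) follows from the elementary computation \(\sqrt{2\,F(s)} = \frac{1}{\sqrt{2}}(1-s^2)\) for \(\vert s \vert \le 1\), so that

$$\begin{aligned} c_F = \int _{-1}^1 \sqrt{2\,F(s)}\;{\mathrm{d}}s = \frac{1}{\sqrt{2}} \int _{-1}^{1} (1-s^2)\, {\mathrm{d}}s = \frac{1}{\sqrt{2}} \Bigl ( 2 - \frac{2}{3}\Bigr ) = \tfrac{1}{3}\,2^{\frac{3}{2}}\, . \end{aligned}$$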

Below, we show that iterates \(\{X^j\}_{j=1}^J\) of Scheme 3.1 converge to the limiting Mullins–Sekerka problem (5.1); see Theorem 5.7 for a precise specification of the convergence result. For this purpose, we need sharper stability and convergence results than those available from Sect. 3, which also requires tightening assumption (B), and thus further restricting admissible choices of \(\gamma >0\). We note that the stronger stability estimates below are derived using the (analytically) strong formulation of Scheme 3.1, i.e., \({\mathbb {P}}\)-a.s., a.e. in \({\mathcal {D}}\):

$$\begin{aligned} X^j-X^{j-1} - k\Delta w^{j}&= \varepsilon ^{\gamma }g\Delta _j W\,, \nonumber \\ - \varepsilon \Delta X^j + \frac{1}{\varepsilon }f(X^j)&= w^j\,, \end{aligned}$$
(5.2)

and \(\partial _{{n}} X^j = \partial _{{n}} w^j = 0\) a.e. on \(\partial {\mathcal {D}}\). The derivation can be justified rigorously (cf. Lemma 3.1, ii)) by the regularity of the Neumann Laplace operator, cf. [24, p. 217, Thm. 4].

Lemma 5.1

Assume (B). For every \(2<p<3\), there exists \(C \equiv C(p)>0\) such that the solution \(\{X^j\}_{j=1}^J\) of Scheme 3.1 satisfies

$$\begin{aligned} {{\mathbb {E}}}\bigl [ \max _{1 \le j \le J} \Vert X^j\Vert ^p_{{\mathbb {L}}^{\infty }}\bigr ] \le {C\varepsilon ^{1-p} k^{\frac{2-p}{2}}\,.} \end{aligned}$$

Proof

1. The second equation in Scheme 3.1 (i.e., (5.2)\(_2\)) implies \(\sqrt{k}\Vert \Delta X^j(\omega )\Vert \le 2\frac{\sqrt{k}}{\varepsilon }\Vert w^j(\omega )\Vert + 2\frac{\sqrt{k}}{\varepsilon ^2}\Vert f(X^j(\omega ))\Vert \), for \(\omega \in \Omega \). Then Lemma 3.2, (ii), together with the Gagliardo–Nirenberg and Poincaré inequalities, implies

$$\begin{aligned}&\displaystyle {\mathbb {E}}\big [\max _{1\le j \le J} \sqrt{k}\Vert \Delta X^j\Vert \big ]\nonumber \\&\quad \le \displaystyle \frac{C}{\varepsilon }{\mathbb {E}}\Big [\Big (k\sum _{j=1}^{J}\Vert \nabla w^j\Vert ^2\Big )^{1/2}\Bigr ] + \frac{C\sqrt{k}}{\varepsilon ^2}{\mathbb {E}}\left[ \max _{1\le j \le J} \left( \Vert X^j\Vert _{{\mathbb {L}}^6}^3 + \Vert X^j\Vert \right) \right] \nonumber \\&\quad \le \displaystyle \frac{C}{\varepsilon } + \frac{C\sqrt{k}}{\varepsilon ^2}{\mathbb {E}}\Big [ \max _{1\le j \le J} \Vert X^j\Vert _{{\mathbb {L}}^4}^2\Vert \nabla X^j\Vert \Big ] \nonumber \\&\quad \le \displaystyle \frac{C}{\varepsilon } + \frac{C\sqrt{k}}{\varepsilon ^2}{\mathbb {E}}\Big [\max _{1\le j \le J} \Vert X^j\Vert _{{\mathbb {L}}^4}^4\Big ]^{1/2}{\mathbb {E}}\Big [\max _{1\le j \le J}\Vert \nabla X^j\Vert ^2\Big ]^{1/2}\, , \end{aligned}$$
(5.3)

which is bounded by \(C \varepsilon ^{{-1}}\) for \(k \le \varepsilon ^4\) (which is guaranteed by assumption (B)).

2. Since \({{\mathbb {W}}}^{1,p} \hookrightarrow {\mathbb {L}}^{\infty }\) (\(p>2\)), by the Gagliardo–Nirenberg inequality \(\Vert \cdot \Vert _{{\mathbb {L}}^p}\le C_p\Vert \cdot \Vert _{{\mathbb {L}}^2}^{\frac{2}{p}} \Vert \cdot \Vert _{{\mathbb {H}}^1}^{\frac{p-2}{p}}\) (\(d=2\), \(p>2\)), the Hölder inequality, Lemma 3.2, iv), and step 1., we get for \(2<p<3\)

$$\begin{aligned} {\mathbb {E}}\Big [{\max _{1\le j\le J}} \Vert X^j\Vert _{{\mathbb {L}}^\infty }^p\Big ]\le & {} { C {\mathbb {E}}\Big [\max _{1\le j \le J}\Vert \nabla X^j\Vert _{{\mathbb {L}}^p}^p\Big ] \le C {\mathbb {E}}\Big [\max _{1\le j \le J}\Vert \nabla X^j\Vert ^2\Vert \Delta X^j\Vert ^{p-2}\Big ] } \\\le & {} {C{\mathbb {E}}\Big [\max _{1\le j \le J}\Vert \nabla X^j\Vert ^{\frac{2}{3-p}}\Big ]^{3-p} {\mathbb {E}}\Big [\max _{1\le j \le J}\Vert \Delta X^j\Vert \Big ]^{p-2}} \\\le & {} {C\varepsilon ^{-1}{\mathbb {E}}\Big [\varepsilon ^2\max _{1\le j \le J}\Vert \nabla X^j\Vert ^{4}\Big ]^{\frac{3-p}{2(3-p)}} k^{-\frac{p-2}{2}} {\mathbb {E}}\Big [\sqrt{k}\max _{1\le j \le J}\Vert \Delta X^j\Vert \Big ]^{p-2}} \\\le & {} {C\varepsilon ^{-1} k^{\frac{2-p}{2}}\varepsilon ^{-(p-2)} = C\varepsilon ^{1-p} k^{-\frac{p-2}{2}}} \end{aligned}$$

\(\square \)

The following lemma sharpens the statement of Lemma 3.4 for iterates \(\{Z^j\}_{j=1}^J\), where \(Z^j := X^j - X^j_{\texttt {CH}}\). It involves the parameter \({{\mathfrak {n}}}_{\texttt {CH}}>0\) from Lemma 3.1, (ii).

Lemma 5.2

Suppose (B). There exists \(C >0\) such that

$$\begin{aligned}&{{\mathbb {E}}}\bigl [ \max _{1 \le j \le J}\Vert Z^j\Vert ^2\bigr ] + {{\mathbb {E}}}\Bigl [ \sum _{j=1}^J \Vert Z^j - Z^{j-1}\Vert ^2 + \varepsilon k \sum _{j=1}^J \Vert \Delta Z^j\Vert ^2\Bigr ] \\&\quad \quad + \frac{k}{\varepsilon } \sum _{j=1}^J {{\mathbb {E}}}\Bigl [ \Vert Z^j \nabla Z^j\Vert ^2 + \Vert X^j_{\texttt {CH}} \nabla Z^j\Vert ^2 \Bigr ] \le {{{\mathcal {F}}}_1(k,\varepsilon ; \sigma _0, \kappa _0, \gamma )} := \\&\quad := C \max \Bigl \{ \Bigl (\frac{\max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}}{\varepsilon ^{\kappa _0+10 + 4 {{\mathfrak {n}}}_{\texttt {CH}}}} \Bigr )^{\frac{1}{2}}, \Bigl (\frac{\max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}}{\varepsilon ^{\kappa _0+16}} \Bigr )^{\frac{1}{4}} \Bigr \} \, . \end{aligned}$$

In order to establish convergence to zero (for \(\varepsilon \downarrow 0\)) of the right-hand side in the inequality of the lemma, we need to impose a stronger assumption than (B); for simplicity, we assume \({{\mathfrak {n}}}_{\texttt {CH}} \ge \frac{3}{2}\) in Lemma 3.1:

(C\(_1\)):

Assume (B), and that \((\sigma _0, \kappa _0, \gamma )\) also satisfies

$$\begin{aligned}\sigma _0>10 + \kappa _0 + 4 {{\mathfrak {n}}}_{\texttt {CH}}\,, \qquad \gamma > \max \bigl \{\frac{2\kappa _0+19+8 {{\mathfrak {n}}}_{\texttt {CH}}}{3} , \frac{\kappa _0 + 10 + 4{{\mathfrak {n}}}_{\texttt {CH}}}{2} \bigr \}\, .\end{aligned}$$

For sufficiently small \(\varepsilon _0 \equiv \varepsilon _0(\sigma _0,\kappa _0) >0\) and \({{\mathfrak {l}}}_{\texttt {CH}} \ge 3\) from Lemma 3.1, and arbitrary \(0< \beta < \frac{1}{2}\), the time-step satisfies

$$\begin{aligned} k \le C \min \bigl \{\varepsilon ^{{{\mathfrak {l}}}_{\texttt {CH}}}, \varepsilon ^{7+\frac{\kappa _0}{2}+2 {{\mathfrak {n}}}_{\texttt {CH}} + \beta }\bigr \} \qquad \forall \, \varepsilon \in (0,\varepsilon _0) \,. \end{aligned}$$

Compared to assumption (B), only larger values of \(\sigma _0\), and consequently larger values of \(\gamma \), as well as smaller time-steps k are admitted.
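To see that the admissible set in (C\(_1\)) is non-empty, one may track the \(\varepsilon \)-exponents exactly. The following Python sketch performs this bookkeeping for the time-step scaling \(k = \varepsilon ^{\alpha }\), with the assumed value \({{\mathfrak {n}}}_{\texttt {CH}} = \frac{3}{2}\); the parameter values are hypothetical (one checks that they satisfy (B) and (C\(_1\))), and all constants as well as the threshold \(\varepsilon _0\) are suppressed.

```python
from fractions import Fraction as Fr

n_CH = Fr(3, 2)                                     # assumed, as in the text
sigma0, kappa0, gamma = Fr(181, 2), Fr(64), Fr(61)  # hypothetical; satisfy (B), (C_1)
alpha = Fr(48)                                      # time-step scaling k = eps**alpha

# eps-exponent of max{k^2/eps^4, eps^(gamma+(sigma0+1)/3), eps^sigma0, eps^(2*gamma)};
# for eps < 1 the largest term carries the smallest exponent
m = min(2 * alpha - 4, gamma + (sigma0 + 1) / 3, sigma0, 2 * gamma)

b1 = (m - (kappa0 + 10 + 4 * n_CH)) / 2   # exponent of the first branch of F_1
b2 = (m - (kappa0 + 16)) / 4              # exponent of the second branch of F_1
e1 = min(b1, b2)
print('F_1 ~ eps^', e1)                   # 21/8 > 0, hence F_1 -> 0 as eps -> 0
```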

Proof

1. We subtract Scheme 3.1 [in strong form (5.2)] for \(g\not \equiv 0\) and \(g\equiv 0\), respectively, fix \(\omega \in \Omega \), and multiply the first error equation with \(Z^j(\omega )\) and the second equation with \(-\Delta Z^j(\omega )\). After subtracting the resulting second equation from the first one and using that \((-\Delta w^j, Z^j) = ( w^j, -\Delta Z^j)\) we obtain

$$\begin{aligned}&\frac{1}{2} \bigl ( \Vert Z^j\Vert ^2 - \Vert Z^{j-1}\Vert ^2 + \Vert Z^j - Z^{j-1} \Vert ^2\bigr ) + \varepsilon k \Vert \Delta Z^j\Vert ^2 \nonumber \\&\qquad + \frac{k}{\varepsilon } \bigl ( f(X^j)-f(X^j_{\texttt {CH}}), -\Delta Z^j\bigr ) = \varepsilon ^{\gamma } \bigl ( g,Z^j \bigr ) \Delta _j W\, . \end{aligned}$$
(5.4)

We estimate the right-hand side above as

$$\begin{aligned} \varepsilon ^{\gamma } \bigl ( g,Z^j \bigr ) \Delta _j W= & {} \varepsilon ^{\gamma } \bigl ( g, Z^j-Z^{j-1}\bigr ) \Delta _j W + \varepsilon ^{\gamma } \bigl (g, Z^{j-1}\bigr ) \Delta _j W \\\le & {} \frac{1}{4} \Vert Z^j-Z^{j-1}\Vert ^2 + \varepsilon ^{2\gamma } \Vert g\Vert ^2 \vert \Delta _j W\vert ^2 +\varepsilon ^{\gamma } \bigl ( g, Z^{j-1}\bigr ) \Delta _j W\, . \end{aligned}$$

We restate the nonlinear term in (5.4) as

$$\begin{aligned}&\frac{k}{\varepsilon } \bigl ( f(X^j)-f(X^j_{\texttt {CH}}), -\Delta Z^j\bigr ) \\&\quad = \frac{k}{\varepsilon } \Bigl ( \vert X^j\vert ^2 X^j - \vert X^j_{\texttt {CH}} \vert ^2 X^j + \vert X^j_{\texttt {CH}} \vert ^2 X^j - \vert X_{\texttt {CH}}^j\vert ^2 X^j_{\texttt {CH}}, -\Delta Z^j\Bigr ) - \frac{k}{\varepsilon } (Z^j, -\Delta Z^j) \\&\quad = \frac{k}{\varepsilon } \Bigl ( Z^j [Z^j + 2 X^j_{\texttt {CH}}] X^j + \vert X^j_{\texttt {CH}}\vert ^2 Z^j, -\Delta Z^j\Bigr ) - \frac{k}{\varepsilon } \Vert \nabla Z^j\Vert ^2 \\&\quad = \frac{k}{\varepsilon } \bigl ( \vert Z^j\vert ^2 Z^j, -\Delta Z^j \bigr ) -\frac{k}{\varepsilon } \Vert \nabla Z^j\Vert ^2 + \frac{3k}{\varepsilon } \bigl ( \vert Z^j\vert ^2 X^j_{\texttt {CH}} , -\Delta Z^j\bigr ) + \frac{3k}{\varepsilon } \bigl (\vert X^j_{\texttt {CH}}\vert ^2 Z^j , -\Delta Z^j\bigr ) \\&\quad =: \frac{3k}{\varepsilon } \Vert Z^j \nabla Z^j\Vert ^2 -\frac{k}{\varepsilon } \Vert \nabla Z^j\Vert ^2 + \texttt {I}_{1} + \texttt {I}_{2}\, , \end{aligned}$$

where in the last step we used integration by parts \(\bigl ( \vert Z^j\vert ^2 Z^j, -\Delta Z^j \bigr ) = 3\Vert Z^j \nabla Z^j\Vert ^2\).

Next, we apply integration by parts to \(\texttt {I}_1\), \(\texttt {I}_2\) to estimate

$$\begin{aligned} \texttt {I}_{1}:= & {} \frac{3k}{\varepsilon } \bigl ( \vert Z^j\vert ^2 X^j_{\texttt {CH}} , -\Delta Z^j\bigr ) = \frac{3k}{\varepsilon } \Bigl [2\bigl ( Z^j \nabla Z^j X^j_{\texttt {CH}}, \nabla Z^j\bigr ) + \bigl ( Z^j \nabla Z^j, Z^j \nabla X^j_{\texttt {CH}}\bigr ) \Bigr ] \\\ge & {} - \frac{2k}{\varepsilon } \Bigl [C {\Vert X^j_{\texttt {CH}}\Vert _{{{\mathbb {L}}}^{\infty }}} {\Vert \nabla Z^j\Vert } + {\Vert \nabla X^j_{\texttt {CH}}\Vert _{{{\mathbb {L}}}^4}} {\Vert Z^j\Vert _{{{\mathbb {L}}}^4}}\Bigr ] \Vert Z^j \nabla Z^j\Vert \,, \\ \texttt {I}_{2}:= & {} \frac{{3}k}{\varepsilon } \bigl (\vert X^j_{\texttt {CH}}\vert ^2 Z^j , -\Delta Z^j\bigr ) \ge {\frac{3k}{\varepsilon } \Vert X^j_{\texttt {CH}} \nabla Z^j\Vert ^2} - \frac{6k}{\varepsilon } {\Vert Z^j}\Vert _{{{\mathbb {L}}}^{4}} {\Vert \nabla X^j_{\texttt {CH}}\Vert _{{{\mathbb {L}}}^4}} \Vert X^j_{\texttt {CH}}\nabla Z^j \Vert \, . \end{aligned}$$

Hence, using Poincaré, Sobolev and Young’s inequalities, Lemma 3.1, (ii), and assumption (B), we deduce that

$$\begin{aligned} \frac{k}{\varepsilon } \bigl ( f(X^j)-f(X^j_{\texttt {CH}}), -\Delta Z^j\bigr ) \ge {\frac{k}{2\varepsilon } \bigl [\Vert Z^j \nabla Z^j\Vert ^2 + \Vert X^j_{\texttt {CH}} \nabla Z^j\Vert ^2} \bigr ] - \frac{Ck}{\varepsilon ^{1+2{{\mathfrak {n}}}_{\texttt {CH}}}} {\Vert \nabla Z^j\Vert ^2}\, . \end{aligned}$$

2. We insert these bounds into (5.4), sum up over all time-steps, take \(\max _{j\le J}\) and expectations,

$$\begin{aligned}&\frac{1}{2}{{\mathbb {E}}}\left[ \max _{1 \le j \le J} \Vert Z^j\Vert ^2\right] + {{\mathbb {E}}}\left[ \sum _{j=1}^J \frac{1}{4} \Vert Z^j - Z^{j-1}\Vert ^2 + \varepsilon k \sum _{j=1}^J \Vert \Delta Z^j\Vert ^2\right] \nonumber \\&\quad \quad + \frac{k}{2\varepsilon } \sum _{j=1}^J {{\mathbb {E}}}\left[ { \Vert Z^j \nabla Z^j \Vert ^2} + \Vert X^j_{\texttt {CH}} \nabla Z^j\Vert ^2 \right] \nonumber \\&\quad \le {\frac{C k}{\varepsilon ^{1+{2{\mathfrak {n}}}_{\texttt {CH}}}} {\sum _{j=1}^J {{\mathbb {E}}}\bigl [ \Vert \nabla Z^j \Vert ^2\bigr ]}} + \varepsilon ^{\gamma }{\mathbb {E}}\left[ \max _{1 \le j \le J} \sum _{i=1}^j\bigl ( g, Z^{i-1}\bigr ) \Delta _i W\right] + C \varepsilon ^{2\gamma }\,.\qquad \end{aligned}$$
(5.5)

We use the discrete BDG-inequality (Lemma 3.3) and the Poincaré inequality to estimate the last term as follows,

$$\begin{aligned} \varepsilon ^{\gamma }{\mathbb {E}}\Big [\max _{1 \le j \le J} \sum _{i=1}^j \bigl ( g, Z^{i-1}\bigr ) \Delta _i W\Big ] \le C\varepsilon ^{\gamma } \Vert g\Vert _{{\mathbb {L}}^\infty } {\mathbb {E}}\Big [ k \sum _{j=1}^J \Vert \nabla Z^{j-1}\Vert ^2\Big ]^{\frac{1}{2}}\,. \end{aligned}$$

We now use Lemma 3.7 to bound the right-hand side of (5.5); the two resulting contributions correspond to the two branches in the definition of \({{\mathcal {F}}}_1\). \(\square \)

A crucial step in this section is to establish convergence of \(\max _{1 \le j \le J}\Vert Z^j\Vert _{{{\mathbb {L}}}^{\infty }}\) for \(\varepsilon \downarrow 0\); it turns out that this can only be validated on large subsets of \(\Omega \), which motivates the introduction of the following (family of) subsets: For every \(2< p < 3\), we define

$$\begin{aligned} \kappa \equiv \kappa _p := {\Bigl [ \varepsilon ^{1-p} k^{\frac{2-p}{2}} \ln \bigl (\varepsilon ^{1-p} \bigr )\Bigr ]^{\frac{1}{p}}} \, , \end{aligned}$$
(5.6)

and the sequence of sets \(\{ {\Omega }_{\kappa , j}\}_{j=1}^J \subset \Omega \) via

$$\begin{aligned} {\Omega }_{\kappa , j} = \bigl \{ \omega \in \Omega : \, \max _{1 \le \ell \le j} \Vert X^\ell \Vert _{{{\mathbb {L}}}^{\infty }} \le \kappa \bigr \} \qquad (\kappa >0)\, . \end{aligned}$$
(5.7)

Note that \({\Omega }_{\kappa , j} \subset {\Omega }_{\kappa , j-1}\). Markov’s inequality yields that

$$\begin{aligned} {{\mathbb {P}}}\bigl [{\Omega }_{\kappa ,j}\bigr ] \ge 1- \frac{{\mathbb E}[\max _{1 \le \ell \le j} \Vert X^\ell \Vert ^p_{{\mathbb {L}}^{\infty }}]}{\kappa ^p}\, . \end{aligned}$$
(5.8)

Clearly, \(\displaystyle {\lim _{\varepsilon \downarrow 0}}\min _{1\le j\le J}{\mathbb {P}}[\Omega _{\kappa ,j}] = 1\) by Lemma 5.1.
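Indeed, inserting the moment bound from Lemma 5.1 and the choice (5.6) of \(\kappa \) into (5.8) shows, uniformly in \(1 \le j \le J\),

$$\begin{aligned} {{\mathbb {P}}}\bigl [{\Omega }_{\kappa ,j}\bigr ] \ge 1 - \frac{C\, \varepsilon ^{1-p} k^{\frac{2-p}{2}}}{\varepsilon ^{1-p} k^{\frac{2-p}{2}} \ln \bigl (\varepsilon ^{1-p} \bigr )} = 1 - \frac{C}{(p-1)\, \vert \ln \varepsilon \vert }\, , \end{aligned}$$

and the right-hand side tends to one as \(\varepsilon \downarrow 0\).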

We use Lemma 5.2 to show a local error estimate.

Lemma 5.3

Assume (B) and \(2<p<3\). Then there exists \(C>0\) such that

$$\begin{aligned}&{{\mathbb {E}}}\bigl [ { \max _{0\le j \le J}} \mathbb {1}_{{\Omega }_{\kappa ,j}}\Vert \nabla Z^j\Vert ^2\bigr ] \le {{\mathcal {F}}}_2({k, \varepsilon }; \sigma _0, \kappa _0, \gamma ) := \\&\quad := C\max \Bigl \{ \frac{(1+\kappa ^2)}{\varepsilon ^2} {{{\mathcal {F}}}_1 \bigl (k, \varepsilon ; \sigma _0, \kappa _0, \gamma \bigr )} ,\frac{(1+ \kappa ^2)}{\varepsilon ^{7+ 2{{\mathfrak {n}}}_{\texttt {CH}}}} \Bigl (\frac{1}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{4}}\Bigr \}. \end{aligned}$$

In order to establish convergence to zero (for \(\varepsilon \downarrow 0\)) of the right-hand side in the inequality of the lemma, we impose again a stronger assumption than (C\(_1\)):

(C\(_2\)):

Assume (C\(_1\)), and that \((\sigma _0, \kappa _0, \gamma )\) and k satisfy

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} {{\mathcal {F}}}_2({k, \varepsilon }; \sigma _0, \kappa _0, \gamma ) =0\,. \end{aligned}$$
(5.9)

Remark 5.4

A strategy to identify admissible quadruples \((\sigma _0, \kappa _0, \gamma , k)\) which meet assumption (C\(_2\)) is as follows:

(1)

    assumption (C\(_1\)) establishes \(\lim _{\varepsilon \downarrow 0} {{{\mathcal {F}}}_1(k,\varepsilon ; \sigma _0, \kappa _0, \gamma )} = 0\), which appears as a factor in the first term on the right-hand side in Lemma 5.3.

(2)

    the leading factor in \({\mathcal {F}}_2\) is \(\frac{\kappa ^2}{\varepsilon ^2} \equiv \frac{\kappa _p^2}{\varepsilon ^2} \le \varepsilon ^{\frac{1-3p}{p}} \bigl \vert \ln (\varepsilon ^{1-p} )\bigr \vert ^{\frac{2}{p}} k^{\frac{2-p}{p}}\) for \(2<p<3\), by (5.6). Meeting (5.9) therefore additionally requires, for some \(p>2\),

    $$\begin{aligned} {k^{\frac{2-p}{p}}} {{{\mathcal {F}}}_1(k,\varepsilon ; \sigma _0, \kappa _0, \gamma )}\varepsilon ^{{\frac{1-3p}{p}}} \bigl \vert \ln (\varepsilon ^{1-p} )\bigr \vert ^{\frac{2}{p}} \rightarrow 0 \qquad (\varepsilon \downarrow 0)\, , \end{aligned}$$
    (5.10)

    and hence

    $$\begin{aligned} \Bigl [{{{\mathcal {F}}}_1(k,\varepsilon ; \sigma _0, \kappa _0, \gamma )}\varepsilon ^{ {\frac{1-3p}{p}}} \bigl \vert \ln (\varepsilon ^{1-p} )\bigr \vert ^{\frac{2}{p}}\Bigr ]^{{\frac{p}{p-2}}} = o(k)\, . \end{aligned}$$
    (5.11)

    A proper scenario is to take \(k = \varepsilon ^{\alpha }\) for some \(\alpha >0\) such that assumption (C\(_1\)) is met. We then sharpen this choice of the time-step to \(k = \varepsilon ^{{\widetilde{\alpha }}}\) for some \({\widetilde{\alpha }} \ge \alpha >0\) to have

    $$\begin{aligned}{{{\mathcal {F}}}_1(k,\varepsilon ; \sigma _0, \kappa _0, \gamma )}{\varepsilon ^{\frac{1-3p}{p}}} \ln ^{\frac{2}{p}}\bigl (\varepsilon ^{1-p} \bigr ) \le \varepsilon ^\eta \end{aligned}$$

    for an arbitrary \(\eta > 0\). We now choose \(p > 2\) such that \({\frac{p}{p-2}}\) is sufficiently large to meet (5.11); a concrete numerical check is sketched after this remark.

(3)

    We may proceed analogously for the second term on the right-hand side in Lemma 5.3.
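Continuing the exponent bookkeeping from the sketch following assumption (C\(_1\)) (reusing e1 = 21/8 and alpha = 48 from there, and neglecting constants and the logarithmic factor), one may test (5.11) for a concrete choice \(p = 2 + \delta \):

```python
p = Fr(501, 250)                             # p = 2 + delta with delta = 1/250
lhs = (e1 + (1 - 3 * p) / p) * p / (p - 2)   # eps-exponent of the bracket in (5.11)
print(lhs, lhs > alpha)                      # 497/8 True: (5.11) holds for this choice
```

As anticipated in (2) above, the margin comes from the factor \(\frac{p}{p-2}\), which blows up as \(p \downarrow 2\).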

Proof

We subtract Scheme 3.1 for \(g\not \equiv 0\) and \(g\equiv 0\) for a fixed \(\omega \in \Omega \), and multiply the first error equation with \(-\Delta Z^j(\omega )\), and the second with \(\Delta ^2 Z^j(\omega )\). We integrate by parts in the nonlinear term and obtain

$$\begin{aligned}&\frac{1}{2} \bigl ( \Vert \nabla Z^j\Vert ^2 - \Vert \nabla Z^{j-1}\Vert ^2 + \Vert \nabla [Z^j - Z^{j-1}]\Vert ^2\bigr ) + \varepsilon k \Vert \nabla \Delta Z^j\Vert ^2 \nonumber \\&\quad = {\frac{k}{\varepsilon } \bigl ( \nabla [f(X^j)-f(X^j_{\texttt {CH}})], \nabla \Delta Z^j\bigr )} + \varepsilon ^{\gamma } {\bigl ( g, -\Delta Z^j\bigr ) \Delta _j W} =: \texttt {I} + \texttt {II} .\nonumber \\ \end{aligned}$$
(5.12)

We proceed as in the proof of Lemma 5.2 and rewrite the nonlinearity on the right-hand side as

$$\begin{aligned} \texttt {I}= & {} \frac{k}{\varepsilon } \bigl ( \nabla [\vert Z^j\vert ^2 Z^j], \nabla \Delta Z^j \bigr ) + \frac{3k}{\varepsilon } \bigl ( \nabla [\vert Z^j\vert ^2 X^j_{\texttt {CH}}] , \nabla \Delta Z^j\bigr ) \\&+ \frac{{3}k}{\varepsilon } \bigl (\nabla [\vert X^j_{\texttt {CH}}\vert ^2 Z^j], \nabla \Delta Z^j\bigr ) + \frac{k}{\varepsilon }\Vert \Delta Z^j\Vert ^2 \\=: & {} \texttt {I}_1 + \texttt {I}_2 + \texttt {I}_3 + \frac{k}{\varepsilon }\Vert \Delta Z^j\Vert ^2 \, . \end{aligned}$$

We estimate

$$\begin{aligned} \texttt {I}_1\le & {} \frac{Ck}{\varepsilon ^3} {\Vert Z^j\Vert _{{\mathbb {L}}^{\infty }}^2} {\Vert Z^j \nabla Z^j\Vert ^2} + \frac{\varepsilon k}{8} \Vert \nabla \Delta Z^j\Vert ^2\,, \\ \texttt {I}_2\le & {} \frac{Ck}{\varepsilon ^3} \bigl ( {\Vert Z^j\Vert ^2_{{{\mathbb {L}}}^{\infty }}} {\Vert X^j_{\texttt {CH}}\Vert ^2_{{{\mathbb {L}}}^{\infty }}} \Vert \nabla Z^j\Vert ^2 + {\Vert Z^j\Vert ^2_{{{\mathbb {L}}}^\infty }} \Vert Z^j\Vert ^2_{{{\mathbb {L}}}^4} {\Vert \nabla X^j_{\texttt {CH}}\Vert ^2_{{{\mathbb {L}}}^4}}\bigr ) + \frac{\varepsilon k}{8} \Vert \nabla \Delta Z^j\Vert ^2\,, \\ \texttt {I}_3\le & {} \frac{Ck}{\varepsilon ^3} \bigl ({\Vert X^j_{\texttt {CH}}\Vert ^4_{{{\mathbb {L}}}^{\infty }}}\Vert \nabla Z^j\Vert ^2_{{\mathbb {L}}^{2}} + {\Vert X^j_{\texttt {CH}}\Vert ^2_{{{\mathbb {L}}}^{\infty }} \Vert \nabla X^j_{\texttt {CH}}\Vert ^2_{{{\mathbb {L}}}^{4}}} \Vert Z^j\Vert ^2_{{{\mathbb {L}}}^4}\bigr ) + \frac{\varepsilon k}{8} \Vert \nabla \Delta Z^j\Vert ^2\, . \end{aligned}$$

We estimate \(\sum _{\ell =1}^3\texttt {I}_{\ell }\) on \(\Omega _{\kappa ,j}\) via Lemma 3.1, (ii)-(iii) and the embedding \({\mathbb H}^1 \hookrightarrow {{\mathbb {L}}}^{4}\), recalling (5.7):

$$\begin{aligned} \displaystyle \mathbb {1}_{\Omega _{\kappa ,j}} \sum _{\ell =1}^3\texttt {I}_{\ell }\le \displaystyle \mathbb {1}_{\Omega _{\kappa ,j}}\big \{ \frac{\varepsilon k}{2} \Vert \nabla \Delta Z^j\Vert ^2 +\frac{C(1+{\kappa ^2})k}{\varepsilon ^3} {\Vert Z^j \nabla Z^j\Vert ^2} \displaystyle \displaystyle + \frac{C(1+ {\kappa ^2}) k}{\varepsilon ^{3+2{\mathfrak {n}}_{\texttt {CH}}}} \Vert \nabla Z^j\Vert ^2\big \}\, . \end{aligned}$$
(5.13)

We multiply (5.12) (with j replaced by i) by \(\mathbb {1}_{\Omega _{\kappa ,i}}\), sum up for \(1 \le i \le j\), take \(\max _{1\le j \le J}\) and expectation, and employ the telescoping identity (recall that \(\mathbb {1}_{\Omega _{\kappa ,j-1}} - \mathbb {1}_{\Omega _{\kappa ,j}}\ge 0\), and \(Z^0 = 0\))

$$\begin{aligned} \sum _{i=1}^j \mathbb {1}_{\Omega _{\kappa ,i}} \bigl ( \Vert \nabla Z^{i}\Vert ^2 - \Vert \nabla Z^{i-1}\Vert ^2\bigr ) = \mathbb {1}_{\Omega _{\kappa ,j}}\Vert \nabla Z^j\Vert ^2 + \sum _{i=1}^{j} \bigl (\mathbb {1}_{\Omega _{\kappa ,i-1}} - \mathbb {1}_{\Omega _{\kappa ,i}}\bigr ) \Vert \nabla Z^{i-1}\Vert ^2\,, \end{aligned}$$

use Lemmata 5.2 and 3.7 to estimate (5.13) and obtain

$$\begin{aligned}&\frac{1}{2} {{\mathbb {E}}}\Bigl [ {\max _{0\le j\le J}} \mathbb {1}_{\Omega _{\kappa ,j}}\Vert \nabla Z^j\Vert ^2\Bigr ] + \frac{1}{2} \sum _{j=1}^J {{\mathbb {E}}}\Bigl [ {\bigl (\mathbb {1}_{\Omega _{\kappa ,j-1}} - \mathbb {1}_{\Omega _{\kappa ,j}}\bigr )} \Vert \nabla Z^{j-1}\Vert ^2\Bigr ] \nonumber \\&\qquad + \frac{1}{2} \sum _{j=1}^J {\mathbb E}\Bigl [\mathbb {1}_{\Omega _{\kappa ,j}} \bigl (\Vert \nabla [Z^j - Z^{j-1}]\Vert ^2 + \varepsilon k \Vert \nabla \Delta Z^j\Vert ^2\bigr )\Bigr ] \nonumber \\&\quad \le {\max \Bigl \{ \frac{C(1+\kappa ^2)}{\varepsilon ^2} {{{\mathcal {F}}}_1 \bigl (k, \varepsilon ; \sigma _0, \kappa _0, \gamma \bigr )},\frac{C(1+ \kappa ^2)}{\varepsilon ^{7+ 2{{\mathfrak {n}}}_{\texttt {CH}}}} \Bigl (\frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{4}}\Bigr \}} \nonumber \\&\qquad + \varepsilon ^\gamma {\mathbb {E}}\Big [\max _{0\le j\le J} \sum _{i=1}^j \mathbb {1}_{\Omega _{\kappa ,i}}\bigl (g, -\Delta Z^{i}\bigr ) \Delta _i W\Big ]\,. \end{aligned}$$
(5.14)

To estimate the stochastic term we use \(\partial _{n}g = 0\) on \(\partial {{\mathcal {D}}}\) and proceed as follows,

$$\begin{aligned}&\varepsilon ^\gamma {\mathbb {E}}\Bigl [ \max _{0\le j \le J} \sum _{i=1}^j \mathbb {1}_{\Omega _{\kappa ,i}}\bigl (-\Delta g, Z^{i}\bigr ) \Delta _i W\Bigr ] \\&\quad = \varepsilon ^{\gamma } {\mathbb {E}}\Bigl [ \max _{0\le j\le J} \sum _{i=1}^j\Big ( \mathbb {1}_{\Omega _{\kappa ,i}} \bigl (-\Delta g, Z^i-Z^{i-1}\bigr ) \Delta _i W + \mathbb {1}_{\Omega _{\kappa ,i-1}} \bigl (\nabla g, \nabla Z^{i-1}\bigr ) \Delta _i W \\&\qquad + \bigl ( \mathbb {1}_{\Omega _{\kappa ,i}} - \mathbb {1}_{\Omega _{\kappa ,i-1}}\bigr ) \bigl (\nabla g, \nabla Z^{i-1}\bigr ) \Delta _i W\Big ) \Bigr ] \\&\quad \le \frac{\varepsilon ^\gamma }{2}\sum _{i=1}^J {{\mathbb {E}}}\Bigl [ \Vert Z^i-Z^{i-1}\Vert ^2 + \Vert \Delta g\Vert ^2\vert \Delta _i W\vert ^2 \Bigr ] + \varepsilon ^{\gamma }{\mathbb {E}}\Bigl [ \max _{0\le j\le J} \sum _{i=1}^j \mathbb {1}_{\Omega _{\kappa ,i-1}} \bigl (\nabla g, \nabla Z^{i-1}\bigr ) \Delta _i W \Bigr ] \\&\qquad + \frac{1}{4} \sum _{i=1}^J {{\mathbb {E}}}\bigl [ \bigl ( \mathbb {1}_{\Omega _{\kappa ,i}} - \mathbb {1}_{\Omega _{\kappa ,i-1}}\bigr )^2 \Vert \nabla Z^{i-1}\Vert ^2\bigr ] + C\varepsilon ^{2\gamma } k \sum _{i=1}^J {{\mathbb {E}}}\bigl [ \Vert \nabla g\Vert ^2 \bigr ] \ . \end{aligned}$$

The first term on the right-hand side may be bounded by Lemma 5.2, the third term is absorbed in the left-hand side of (5.14), and for the second term we use the discrete BDG-inequality (Lemma 3.3) and Lemma 3.7 to estimate

$$\begin{aligned}&\varepsilon ^{\gamma }{\mathbb {E}}\Bigl [ \max _{0\le j\le J} \sum _{i=1}^j {\mathbb {1}_{\Omega _{\kappa ,i-1}} \bigl (\nabla g, \nabla Z^{i-1}\bigr ) \Delta _i W} \Bigr ] \\&\quad \le C \varepsilon ^{\gamma }\Vert \nabla g\Vert _{{\mathbb {L}}^\infty } {\mathbb {E}}\Bigl [ k\sum _{i=1}^J \Vert \nabla Z^{i-1}\Vert ^2 \Bigr ]^{\frac{1}{2}} \le {\frac{C \varepsilon ^{\gamma }}{\varepsilon ^2} \Bigl (\frac{C}{\varepsilon ^{\kappa _0}} \max \bigl \{ \frac{k^2}{\varepsilon ^4}, \varepsilon ^{\gamma +\frac{\sigma _0+1}{3}}, \varepsilon ^{\sigma _0},\varepsilon ^{2\gamma }\bigr \}\Bigr )^{\frac{1}{4}}}. \end{aligned}$$

Hence, the statement of the lemma follows from (5.14) and the above estimates on noting that \((\mathbb {1}_{\Omega _{\kappa ,j}} - \mathbb {1}_{\Omega _{\kappa ,j-1}})^2 = \mathbb {1}_{\Omega _{\kappa ,j-1}} - \mathbb {1}_{\Omega _{\kappa ,j}} \ge 0\). \(\square \)

The \({\mathbb {L}}^\infty \)-estimate in the next theorem is a crucial ingredient to show convergence to the sharp-interface limit.

Theorem 5.5

Assume (C\(_2\)). For any \(2<p<3\), there exists \(C\equiv C(p) >0\) such that

$$\begin{aligned}&{{\mathbb {E}}}\Bigl [ {\max _{0\le j \le J}} \mathbb {1}_{{\Omega }_{\kappa ,j}}\Vert Z^j\Vert _{{\mathbb {L}}^\infty }^p\Bigr ] \\&\quad \le {C\varepsilon ^{-\frac{p}{2}} k^{\frac{2-p}{2}} \big ({\mathcal {F}}_2 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{3-p} \big ({\mathcal {F}}_1 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{\frac{p-2}{2}}\,.} \end{aligned}$$

Proof

We proceed analogously to step 2. in the proof of Lemma 5.1. We use the Sobolev and Gagliardo–Nirenberg inequalities and apply the Hölder inequality twice; then we use Lemma 5.3 and Lemma 5.2 (i.e., \({\mathbb {E}}\big [\varepsilon k\Vert \Delta Z^j\Vert ^2\big ] \le C\)), along with the triangle inequality in combination with Lemma 3.1 (i) and Lemma 3.2 (iv), and get for \(2<p<3\) that

$$\begin{aligned}&{\mathbb {E}}\Big [{\max _{1\le j\le J}} \mathbb {1}_{{\Omega }_{\kappa ,j}} \Vert Z^j\Vert _{{\mathbb {L}}^\infty }^p\Big ] \\&\quad \le { C {\mathbb {E}}\Big [\max _{1\le j \le J}\mathbb {1}_{{\Omega }_{\kappa ,j}}\Vert \nabla Z^j\Vert _{{\mathbb {L}}^p}^p\Big ] \le C {\mathbb {E}}\Big [\max _{1\le j \le J}\mathbb {1}_{{\Omega }_{\kappa ,j}} \Vert \nabla Z^j\Vert ^2\Vert \Delta Z^j\Vert ^{p-2}\Big ] } \\&\quad \le {C {\mathbb {E}}\Big [\max _{1\le j \le J}\mathbb {1}_{{\Omega }_{\kappa ,j}}\Vert \nabla Z^j\Vert ^{\frac{2(3-p)}{3-p}}\Big ]^{3-p} {\mathbb {E}}\Big [\max _{1\le j \le J}\Vert \nabla Z^j\Vert ^{\frac{2(p-2)}{p-2}} \Vert \Delta Z^j\Vert \Big ]^{p-2}} \\&\quad \le {C \big ({\mathcal {F}}_2 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{3-p} {\mathbb {E}}\Big [\max _{1\le j \le J}\Vert \nabla Z^j\Vert ^{4}\Big ]^{1/2} (\varepsilon k)^{-\frac{p-2}{2}} {\mathbb {E}}\Big [\varepsilon k\Vert \Delta Z^j\Vert ^2\Big ]^{\frac{p-2}{2}}} \\&\quad \le { C(\varepsilon k)^{\frac{2-p}{2}} \big ({\mathcal {F}}_2 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{3-p} \big ({\mathcal {F}}_1 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{\frac{p-2}{2}}} \\&\qquad {\varepsilon ^{-1} \Big (\max _{1\le j \le J}\varepsilon \Vert \nabla X_{\texttt {CH}}^j\Vert ^{2} + {\mathbb {E}}\Big [\varepsilon ^2\max _{1\le j \le J}\Vert \nabla X^j\Vert ^{4}\Big ]^{1/2}\Big ) } \\&\quad \le {C\varepsilon ^{-\frac{p}{2}} k^{\frac{2-p}{2}} \big ({\mathcal {F}}_2 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{3-p} \big ({\mathcal {F}}_1 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{\frac{p-2}{2}}}\,. \end{aligned}$$

\(\square \)

In order to establish convergence to zero (for \(\varepsilon \downarrow 0\)) of the right-hand side in the inequality of the theorem, we again impose an assumption stronger than (C\(_2\)):

(C\(_3\)):

Assume (C\(_2\)), and that \((\sigma _0, \kappa _0, \gamma )\) and k satisfy

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} {\Bigl [\varepsilon ^{-p} k^{2-p} \big ({\mathcal {F}}_2 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{6-2p} \big ({\mathcal {F}}_1 (k, \varepsilon ; \sigma _0, \kappa _0, \gamma )\big )^{p-2}\Bigr ]^{\frac{1}{2}}} = 0\,. \end{aligned}$$
(5.15)

Remark 5.6

We discuss a strategy to identify admissible quadruples \((\sigma _0, \kappa _0, \gamma ,k)\) which meet assumption (C\(_3\)): for this purpose, we limit ourselves to a discussion of the leading term inside the maximum which defines \({{\mathcal {F}}}_2\) (see Lemma 5.3), and recall Remark 5.4.

  (1)

    To meet (5.15) instead of (5.10), we have to ensure that for some \(2<p<3\)

    $$\begin{aligned} \varepsilon ^{-\frac{p}{2}} k^{\frac{2-p}{2}} \Big ({k^{\frac{2-p}{p}}} \varepsilon ^{{\frac{1-3p}{p}}} \bigl \vert \ln (\varepsilon ^{1-p} )\bigr \vert ^{\frac{2}{p}}\Big )^{3-p}\Big ({{{\mathcal {F}}}_1(k,\varepsilon ; \sigma _0, \kappa _0, \gamma )}\Big )^{\frac{4-p}{2}} \rightarrow 0 \qquad (\varepsilon \downarrow 0) \end{aligned}$$

    and hence

    $$\begin{aligned} {\Bigl [ \Big ({{{\mathcal {F}}}_1(k,\varepsilon ; \sigma _0, \kappa _0, \gamma )}\Big )^{\frac{4-p}{2}} \varepsilon ^{{-\frac{p}{2}}}\varepsilon ^{ {\frac{(1-3p)(3-p)}{p}}}\bigl \vert \ln (\varepsilon ^{1-p} ) \bigr \vert ^{\frac{2(3-p)}{p}} \Bigr ]^{{\frac{2p}{(2-p)(6-p)}}}} = o(k)\, . \end{aligned}$$
  (2)

    We may now proceed as in (2) in Remark 5.4 to identify proper choices \(k = \varepsilon ^{{\alpha }}\) (\(\alpha >0\)) and \(p = 2+\delta \), for sufficiently small \(\delta >0\), that guarantee (5.15).

We are now ready to formulate the second main result of this paper, which is convergence in probability of the solution \(\{X^j\}_{j=0}^J\) of Scheme 3.1 to the solution of the deterministic Hele–Shaw/Mullins–Sekerka problem (5.1) for \(\varepsilon \downarrow 0\), provided that assumption (C\(_3\)) is valid, and (5.1) has a classical solution; cf. Theorem 5.7 below. The proof rests on

  a)

    the uniform bounds for \(\{\mathbb {1}_{{\Omega }_{\kappa ,j}}\Vert Z^j\Vert ^p_{{\mathbb {L}}^{\infty }}\}_{j=1}^J\) (see Theorem 5.5), and the property that \({\lim _{\varepsilon \downarrow 0}}\min _{1\le j\le J}{\mathbb {P}}[\Omega _{\kappa ,j}] = 1\) (cf. Lemma 5.1) for the sequence \(\{ \Omega _{\kappa ,j}\}_{j=1}^J \subset \Omega \), and

  b)

    a convergence result for \(\{ X^j_{\texttt {CH}}\}_{j=0}^J\) towards a smooth solution of the Hele–Shaw/Mullins–Sekerka problem in [17, Section 4].

For each \(\varepsilon \in (0,\varepsilon _0)\) we consider below the piecewise affine interpolant in time of the iterates \(\{ X^j\}_{j=0}^J\) of Scheme 3.1 via

$$\begin{aligned} X^{\varepsilon ,k}(t) := \frac{t-t_{j-1}}{k}X^{j} + \frac{t_{j}-t}{k}X^{j-1}\qquad \mathrm {for}\quad t_{j-1}\le t \le t_{j} \, . \end{aligned}$$
(5.16)
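For illustration, evaluating (5.16) from stored iterates is straightforward; the following Python sketch (array layout and names are hypothetical, not part of the actual implementation) makes the indexing explicit.

```python
import numpy as np

def interpolate_in_time(X, k, t):
    """Evaluate the piecewise affine interpolant (5.16) at time t.

    X : array of shape (J+1, L) holding nodal values of X^0, ..., X^J
        (hypothetical layout); k : uniform time-step size, so t_j = j*k;
    t : evaluation time in [0, J*k].
    """
    J = X.shape[0] - 1
    j = min(max(int(np.ceil(t / k)), 1), J)  # index j with t_{j-1} <= t <= t_j
    return ((t - (j - 1) * k) / k) * X[j] + ((j * k - t) / k) * X[j - 1]
```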

Let \(\Gamma _{00} \subset {{\mathcal {D}}}\) in (5.1e) be a smooth closed curve, and \((v_{\texttt {MS}}, \Gamma ^{\texttt {MS}})\) be a smooth solution of (5.1) starting from \(\Gamma _{00}\), where \(\Gamma ^{\texttt {MS}} := \bigcup _{0 \le t \le T} \{t\} \times \Gamma ^{\texttt {MS}}_t\). Let \({\mathrm{d}}(t,{x})\) denote the signed distance function to \(\Gamma ^{\texttt {MS}}_t\) such that \({\mathrm{d}}(t,{x}) < 0\) in \({{\mathcal {I}}}^{\texttt {MS}}_t\), the inside of \(\Gamma ^{\texttt {MS}}_t\), and \({\mathrm{d}}(t, { x})>0\) on \({{\mathcal {O}}}^{\texttt {MS}}_t := {{\mathcal {D}}} \setminus (\Gamma ^{\texttt {MS}}_t \cup {{\mathcal {I}}}^{\texttt {MS}}_t)\), the outside of \(\Gamma ^{\texttt {MS}}_t\). We also define the space–time inside \({\mathcal I}^{\texttt {MS}}\) and outside \({{\mathcal {O}}}^{\texttt {MS}}\),

$$\begin{aligned}{{\mathcal {I}}}^{\texttt {MS}} := \bigl \{ (t, {x}) \in \overline{{{\mathcal {D}}}_T}:\, {\mathrm{d}}(t,{x}) < 0\bigr \}\,, \qquad {{\mathcal {O}}}^{\texttt {MS}} := \bigl \{ (t, {x}) \in \overline{{\mathcal D}_T}:\, {\mathrm{d}}(t,{x}) >0\bigr \}\,. \end{aligned}$$

For the numerical solution \(X^{\varepsilon ,k} \equiv X^{\varepsilon ,k}(t,x)\), we denote the zero level set at time t by \(\Gamma ^{\varepsilon ,k}_t\), that is,

$$\begin{aligned}\Gamma _t^{\varepsilon ,k} := \bigl \{ x \in {{\mathcal {D}}}:\, X^{\varepsilon ,k}(t,x) = 0\bigr \} \qquad (0 \le t \le T)\, . \end{aligned}$$
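Ahead of the computations in Sect. 6, we note that such level sets can be extracted from nodal values on a grid. A minimal, self-contained sketch (linear interpolation along sign-changing grid edges; the tensor-grid layout and names are illustrative assumptions, not the actual implementation):

```python
import numpy as np

def level_set_points(U, xs, ys, level=0.0):
    """Approximate the level set {u = level} of nodal values U on a tensor grid.

    U : array of shape (len(ys), len(xs)) with U[i, j] ~ u(xs[j], ys[i]).
    Returns points where u - level changes sign along grid edges, located by
    linear interpolation (a rough substitute for marching squares).
    """
    V = U - level
    pts = []
    # horizontal edges
    for i in range(V.shape[0]):
        for j in range(V.shape[1] - 1):
            a, b = V[i, j], V[i, j + 1]
            if a * b < 0:
                s = a / (a - b)  # root of the linear interpolant on the edge
                pts.append(((1 - s) * xs[j] + s * xs[j + 1], ys[i]))
    # vertical edges
    for i in range(V.shape[0] - 1):
        for j in range(V.shape[1]):
            a, b = V[i, j], V[i + 1, j]
            if a * b < 0:
                s = a / (a - b)
                pts.append((xs[j], (1 - s) * ys[i] + s * ys[i + 1]))
    return np.array(pts)
```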

We summarize the assumptions needed below concerning the Mullins–Sekerka problem (5.1).

(D):

Let \({{\mathcal {D}}} \subset {{\mathbb {R}}}^2\) be a smooth domain. There exists a classical solution \((v_{\texttt {MS}},\Gamma ^{\texttt {MS}})\) of (5.1) evolving from \(\Gamma _{00} \subset {{\mathcal {D}}}\), such that \(\Gamma ^{\texttt {MS}}_t \subset {{\mathcal {D}}}\) for all \(t \in [0,T]\).

By [1, Theorem 5.1], assumption (D) establishes the existence of a family of smooth initial data \(\{ u_0^\varepsilon \}_{0 \le \varepsilon \le 1}\) such that the corresponding solutions \(u^\varepsilon _{\texttt {CH}}\) of (1.1) with \(g \equiv 0\) are uniformly bounded in \(\varepsilon \) and \((t,x)\) and satisfy

$$\begin{aligned} \mathrm{(i)}&\lim _{\varepsilon \downarrow 0} u^{\varepsilon }_{\texttt {CH}}(t,x) = \left\{ \begin{array}{l} +1 \quad \text{ if } (t,x) \in {{\mathcal {O}}}^{\texttt {MS}}\,, \\ -1 \quad \text{ if } (t,x) \in {{\mathcal {I}}}^{\texttt {MS}}\,, \end{array}\right. \qquad \text{ uniformly } \text{ on } \text{ compact } \text{ subsets } \text{ of } {{\mathcal {O}}}^{\texttt {MS}} \cup {{\mathcal {I}}}^{\texttt {MS}}\, , \\ \mathrm{(ii)}&\lim _{\varepsilon \downarrow 0} \bigl ( \frac{1}{\varepsilon }f(u^\varepsilon _{\texttt {CH}}) - \varepsilon \Delta u^\varepsilon _{\texttt {CH}}\bigr )(t,x) = v_{\texttt {MS}}(t,x) \quad \quad \ \text{ uniformly } \text{ on } {{\mathcal {D}}}_T\, . \end{aligned}$$

The following theorem establishes uniform convergence of iterates \(\{ X^j\}_{j=0}^J\) from Scheme 3.1 in probability on the sets \({{\mathcal {I}}}^{\texttt {MS}}\), \({{\mathcal {O}}}^{\texttt {MS}}\).

Theorem 5.7

Assume (C\(_3\)) and (D). Let \(\{ X^{\varepsilon ,k}\}_{0 < \varepsilon \le \varepsilon _0}\) in (5.16) be obtained via Scheme 3.1. Then

$$\begin{aligned} \mathrm{(i)}&\lim _{\varepsilon \downarrow 0}\, {\mathbb {P}}\left[ \bigl \{ \Vert X^{\varepsilon ,k} - 1\Vert _{C({{\mathcal {A}}})}> \alpha \bigr \} \right] = 0 \qquad \forall \, \alpha> 0\,, \ \forall \, {{\mathcal {A}}} \Subset {\mathcal {O}}^{\texttt {MS}}\, , \\ \mathrm{(ii)}&\lim _{\varepsilon \downarrow 0}\, {\mathbb {P}}\left[ \bigl \{ \Vert X^{\varepsilon ,k} + 1\Vert _{C({{\mathcal {A}}})}> \alpha \bigr \}\right] = 0 \qquad \, \forall \, \alpha > 0\,, \ \forall \, {{\mathcal {A}}} \Subset {\mathcal {I}}^{\texttt {MS}}\, . \end{aligned}$$

Proof

We decompose \(\overline{{{\mathcal {D}}}_T} \setminus \Gamma ^{\texttt {MS}} = {\mathcal I}^{\texttt {MS}} \cup {{\mathcal {O}}}^{\texttt {MS}}\), and consider the related errors \(X^{\varepsilon ,k}_{\texttt {CH}} + 1\), \(X^{\varepsilon ,k}_{\texttt {CH}} - 1\), and \(X^{\varepsilon ,k} - X^{\varepsilon ,k}_{\texttt {CH}}\).

1. By [17, Theorem 4.2], the piecewise affine interpolant \(X^{\varepsilon ,k}_{\texttt {CH}}\) of the iterates \(\{X^j_{\texttt {CH}}\}_{j=0}^J\) satisfies

$$\begin{aligned} \mathrm{i')}&X^{\varepsilon ,k}_{\texttt {CH}} \rightarrow +1 \quad \text{ uniformly } \text{ on } \text{ compact } \text{ subsets } \text{ of } {{\mathcal {O}}}^{\texttt {MS}} \qquad (\varepsilon \downarrow 0)\, , \\ \mathrm{ii')}&X^{\varepsilon ,k}_{\texttt {CH}} \rightarrow -1 \quad \text{ uniformly } \text{ on } \text{ compact } \text{ subsets } \text{ of } {{\mathcal {I}}}^{\texttt {MS}} \qquad \, (\varepsilon \downarrow 0)\, . \end{aligned}$$

2. Since \(\Omega _{\kappa ,J} \subset \Omega _{\kappa ,j}\) for \(1 \le j\le J\), Theorem 5.5 and (C\(_3\)) imply (\(2<p < 3\))

$$\begin{aligned} {{\mathbb {E}}}\bigl [ {\max _{0\le j \le J}} \mathbb {1}_{\Omega _{\kappa ,J}}\Vert Z^j\Vert _{{\mathbb {L}}^\infty }^p\bigr ] \rightarrow 0 \qquad (\varepsilon \downarrow 0)\, . \end{aligned}$$

The discussion around (5.8) shows \(\lim _{\varepsilon \downarrow 0} {{\mathbb {P}}}[\Omega \setminus \Omega _{\kappa ,J}] = 0\). Let \(\alpha > 0\). By Markov’s inequality

$$\begin{aligned}&\displaystyle {\mathbb {P}}\big [\bigl \{\max _{0\le j\le J}\Vert Z^j \Vert _{{\mathbb {L}}^\infty }^p \ge \alpha \bigr \}\big ] \\&\quad \le \displaystyle {\mathbb {P}}\big [\big \{\max _{0\le j\le J}\Vert Z^j \Vert _{{\mathbb {L}}^\infty }^p \ge \alpha \big \}\cap \Omega _{\kappa ,J} \big ] + {\mathbb {P}}\big [\Omega \setminus \Omega _{\kappa ,J} \big ] \\&\quad \le \frac{1}{\alpha } {{\mathbb {E}}\Bigl [\displaystyle \max _{0\le j \le J} \mathbb {1}_{{\Omega }_{\kappa ,J}} \Vert Z^j\Vert _{{\mathbb {L}}^\infty }^p\Bigr ]} + {\mathbb {P}}\big [\Omega \setminus \Omega _{\kappa ,J} \big ] \rightarrow 0 \qquad (\varepsilon \downarrow 0)\, . \end{aligned}$$

The statement then follows by the triangle inequality and part 1. \(\square \)

A consequence of Theorem 5.7 is the convergence in probability of the zero level set \(\{\Gamma ^{\varepsilon ,k}_t;\, t \ge 0\}\) to the interface \(\Gamma _t^{\texttt {MS}}\) of the Mullins–Sekerka/Hele–Shaw problem (5.1).

Corollary 5.8

Assume (C\(_3\)) and (D). Let \(\{ X^{\varepsilon ,k}\}_{0 < \varepsilon \le \varepsilon _0}\) in (5.16) be obtained via Scheme 3.1. Then

$$\begin{aligned} \lim _{\varepsilon \downarrow 0}{\mathbb {P}}\bigl [ \bigl \{\sup _{(t,x)\in [0,T]\times \Gamma ^{\varepsilon ,k}_t} \mathrm {dist}(x,\Gamma _t^{\texttt {MS}})> \alpha \bigr \}\bigr ]=0 \qquad \forall \, \alpha > 0\,. \end{aligned}$$

Proof

We adapt arguments from the proof of [17, Theorem 4.3].

1. For any \(\eta \in (0,1)\) we construct an open tubular neighborhood

$$\begin{aligned}{\mathcal {N}}_\eta := \bigl \{ (t,x) \in {\overline{{{\mathcal {D}}}_T}}:\, {|{\mathrm{d}}(t,x)|} < \eta \bigr \}\end{aligned}$$

of width \(2\eta \) of the interface \(\Gamma ^{\texttt {MS}}\) and define compact subsets

$$\begin{aligned}{\mathcal {A}}_{{\mathcal {I}}} = {\mathcal {I}}^{\texttt {MS}}\setminus {\mathcal {N}}_\eta \,, \qquad {\mathcal {A}}_{{\mathcal {O}}} = {\mathcal {O}}^{\texttt {MS}}\setminus {\mathcal {N}}_\eta \, .\end{aligned}$$

Thanks to Theorem 5.7 there exists \(\varepsilon _0 \equiv \varepsilon _0(\eta ) >0\) such that for all \(\varepsilon \in (0,\varepsilon _0)\) it holds that

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\big [ \{|X^{\varepsilon , {k}}(t,x) - 1| \le \eta \,\,\mathrm {for}\,\, (t,x)\in {\mathcal {A}}_{{\mathcal {O}}}\}\big ] \ge 1-\eta \,,\\&{\mathbb {P}}\big [\{ |X^{\varepsilon , {k}}(t,x) + 1| \le \eta \,\,\mathrm {for}\,\, (t,x)\in {\mathcal {A}}_{{\mathcal {I}}}\} \big ] \ge 1-\eta \,. \end{aligned} \end{aligned}$$
(5.17)

In addition, for any \(t \in [0,T]\) and \(x \in \Gamma ^{\varepsilon ,k}_t\), since \(X^{\varepsilon ,k}(t,x) = 0\), we have

$$\begin{aligned} \bigl \vert X^{\varepsilon , {k}}(t,x) - 1\bigr \vert = \bigl \vert X^{\varepsilon ,{k}}(t,x) + 1\bigr \vert = 1\, . \end{aligned}$$
(5.18)

2. We observe that for any \(\eta \in (0,1)\)

$$\begin{aligned} {{\mathbb {P}}}\bigl [ \bigl \{ \{ (t, \Gamma ^{\varepsilon ,k}_t);\, t \in [0,T]\} \subset {{\mathcal {N}}}_\eta \bigr \}\bigr ]= & {} {{\mathbb {P}}}\bigl [ \bigl \{\{ (t,x):\, t \in [0,T], \ X^{\varepsilon , {k}}(t,x) =0\} \subset {\mathcal N}_\eta \bigr \}\bigr ] \nonumber \\= & {} 1- {{\mathbb {P}}}\bigl [ \bigl \{ \exists \, (t,x) \in \overline{{{\mathcal {D}}}_T} \setminus {{\mathcal {N}}}_\eta :\, X^{\varepsilon , {k}}(t,x) = 0\bigr \}\bigr ] \nonumber \\=: & {} 1- {{\mathbb {P}}}\bigl [ {\widetilde{\Omega }}_3\bigr ] \, . \end{aligned}$$
(5.19)

On noting (5.18) we deduce that \({\mathbb {P}}[{\widetilde{\Omega }}_3] \le {\mathbb {P}}[\Omega _3]\) where

$$\begin{aligned} \Omega _3 := \bigl \{ \exists \, (t,x) \in {{\mathcal {A}}}_{{\mathcal {O}}} :\, \bigl \vert X^{\varepsilon , {k}}(t,x) - 1\bigr \vert> \eta \ \vee \ \exists \, (t,x) \in {{\mathcal {A}}}_{{\mathcal {I}}} :\, \bigl \vert X^{\varepsilon , {k}}(t,x) + 1\bigr \vert > \eta \bigr \}\, .\end{aligned}$$

By (5.17), it holds for \(\varepsilon \in (0,\varepsilon _0)\) that

$$\begin{aligned} 1-{{\mathbb {P}}}[{\widetilde{\Omega }}_3]\ge {{\mathbb {P}}}[\Omega \setminus \Omega _3]= & {} {{\mathbb {P}}}\bigl [\bigl \{ \forall \, (t,x) \in {{\mathcal {A}}}_{{\mathcal {O}}} :\, \bigl \vert X^{\varepsilon , {k}}(t,x) - 1\bigr \vert \le \eta \\&\quad \ \wedge \ \forall \, (t,x) \in {{\mathcal {A}}}_{{\mathcal {I}}} :\, \bigl \vert X^{\varepsilon , {k}}(t,x) + 1\bigr \vert \le \eta \bigr \}\bigr ] \ge 1- {2}\eta \,. \end{aligned}$$

Inserting this estimate into (5.19) yields for all \(\varepsilon \in (0,\varepsilon _0)\)

$$\begin{aligned} {\mathbb {P}}\bigl [ \bigl \{\sup _{(t,x)\in [0,T]\times \Gamma ^{\varepsilon ,k}_t} \mathrm {dist}(x,\Gamma _t^{\texttt {MS}}) \le \alpha \bigr \}\bigr ]\ge & {} {\mathbb {P}}\big [ \{(t,\Gamma ^{\varepsilon ,k}_t),\,\,t\in [0,T]\}\subset {\mathcal {N}}_{\eta } \big ] \\\ge & {} 1-{2}\eta , \end{aligned}$$

which holds for any \(\alpha \ge \eta \). The desired result follows on taking \(\lim _{\varepsilon \downarrow 0}\) in the above inequality and noting that \(\eta \) can be chosen arbitrarily small. \(\square \)

Remark 5.9

The numerical experiments in Sect. 6 suggest that the conditions on \(\gamma \) and k which are required for Theorem 5.7 to hold are too pessimistic; in particular, they indicate convergence to the deterministic Mullins–Sekerka/Hele–Shaw problem already for \(\gamma =1\), \(k ={\mathcal {O}}(\varepsilon )\).

6 Computational experiments

The computational experiments are meant to support and complement the theoretical results in the earlier sections:

  • Convergence to the deterministic sharp-interface limit (5.1) for space–time white noise in Sect. 6.3. We study pathwise convergence of the white-noise-driven simulations to the deterministic sharp-interface limit, a scenario beyond the regular trace-class noise for which Theorem 5.7 and Corollary 5.8 establish convergence in probability.

  • Pathwise convergence to the stochastic sharp interface limit (6.4) (introduced in Sect. 6.2 below) for spatially smooth noise in Sect. 6.4, where we also examine the sensitivity of numerical simulations with respect to the mesh refinement.

6.1 Implementation and adaptive mesh refinement

For the computations below we employ a mass-lumped variant of Scheme 4.1

$$\begin{aligned} \begin{aligned}&(X^j_h-X^{j-1}_h,\varphi _h)_h+k(\nabla w^{j}_h,\nabla \varphi _h)=\varepsilon ^{\gamma }\bigl (g\Delta _j W^h, \varphi _h \bigr )_h \;\;\;\; \, \, \quad \, \forall \, \varphi _h \in {\mathbb {V}}_h\,,\\&\varepsilon (\nabla X^j_h,\nabla \psi _h)+\frac{1}{\varepsilon } \bigl (f(X^j_h),\psi _h \bigr )_h=(w^j_h,\psi _h)_h \quad \qquad \qquad \quad \quad \forall \, \psi _h \in {\mathbb {V}}_h\, ,\\&X^0_h=u_0^{\varepsilon ,h} \in {\mathbb {V}}_h\, , \end{aligned} \end{aligned}$$
(6.1)

where the standard \({\mathbb {L}}^2\)-inner product in Scheme 4.1 is replaced by the discrete (mass-lumped) inner product \((v,w)_h = \int _{{\mathcal {D}}} {\mathcal {I}}^h (v(x)w(x))\,\mathrm {d}x\) for \(v,w\in {\mathbb {V}}_h\), where \({\mathcal {I}}^h:C(\overline{{\mathcal {D}}})\rightarrow {\mathbb {V}}_h\) is the standard nodal interpolation operator. In all experiments we take \({\mathcal {D}} = (0,1)^2\subset {\mathbb {R}}^2\), and g is taken to be constant. We note that an implicit Euler finite element scheme similar to (6.1) has been used previously in [19], where simulations are also performed to study the long-time behavior of the system for different strengths of the (space–time white) noise with fixed \(\varepsilon \).
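To fix ideas, the following sketch implements a one-dimensional finite-difference analogue of one time step of (6.1), solved with Newton's method; the actual computations use the mass-lumped finite element discretization in two dimensions and a multigrid linear solver, so the grid, names, and dense linear algebra here are simplifications for illustration only.

```python
import numpy as np

def neumann_laplacian(L, h):
    """Matrix of -d^2/dx^2 with homogeneous Neumann b.c. on L nodes (spacing h)."""
    A = np.zeros((L, L))
    for i in range(L):
        A[i, i] = 2.0
        A[i, max(i - 1, 0)] -= 1.0      # reflected neighbours encode du/dn = 0
        A[i, min(i + 1, L - 1)] -= 1.0
    return A / h**2

def ch_step(Xold, k, eps, gamma, g, dW, h, newton_steps=20, tol=1e-10):
    """One step of a 1d finite-difference analogue of (6.1):
         X - Xold + k*A*w = eps^gamma * g * dW,
         eps*A*X + f(X)/eps - w = 0,  with f(u) = u^3 - u and A = -Laplacian."""
    L = Xold.size
    A = neumann_laplacian(L, h)
    X, w = Xold.copy(), np.zeros(L)
    rhs1 = Xold + eps**gamma * g * dW
    for _ in range(newton_steps):
        F = np.concatenate([X + k * (A @ w) - rhs1,
                            eps * (A @ X) + (X**3 - X) / eps - w])
        if np.linalg.norm(F) < tol:
            break
        J = np.block([[np.eye(L),                                k * A],
                      [eps * A + np.diag(3 * X**2 - 1) / eps, -np.eye(L)]])
        dU = np.linalg.solve(J, -F)
        X += dU[:L]
        w += dU[L:]
    return X, w
```

The Jacobian mirrors the mixed structure of (6.1): the (1,1) block is the identity, the (1,2) block the discrete diffusion \(kA\), and the (2,1) block carries the linearized drift \(f'(X)/\varepsilon \).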

For a given initial interface \(\Gamma _{00}\) we construct an \(\varepsilon \)-dependent family of initial conditions \(\{u^{\varepsilon }_0\}_{\varepsilon >0}\) as \(u^{\varepsilon }_0(x) =\tanh (\frac{\mathrm {d}_0(x)}{\sqrt{2}\varepsilon })\), where \(\mathrm {d}_0\) is the signed distance function to \(\Gamma _{00}\). Consequently, the functions \(\{u^{\varepsilon }_0\}_{\varepsilon >0}\) have uniformly bounded energy and contain a diffuse layer of thickness proportional to \(\varepsilon \) along \(\Gamma _{00}\), with \(u^\varepsilon _0(x) \approx -1\) in the interior and \(u^\varepsilon _0(x) \approx 1\) in the exterior of \(\Gamma _{00}\). The construction ensures that \(\int _{\mathcal {D}} u^\varepsilon _0\, \mathrm {d}x \rightarrow m_0\) for \(\varepsilon \rightarrow 0\), where \(m_0\) is the difference between the respective areas of the exterior and interior of \(\Gamma _{00}\) in \({\mathcal {D}}\). For convenience we set \(u_0^{\varepsilon , h} = {\mathcal {I}}^h u_0^{\varepsilon }\).
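As a concrete illustration, for \(\Gamma _{00}\) a circle of radius R (the setting of Sect. 6.3.1; the centre at the midpoint of \({\mathcal {D}}\) is an assumption made here for illustration), the construction reads:

```python
import numpy as np

def u0_eps(x1, x2, eps, R=0.2, center=(0.5, 0.5)):
    """Diffuse-interface initial profile u_0^eps = tanh(d_0 / (sqrt(2)*eps)).

    d_0 is the signed distance to the circle Gamma_00 of radius R, negative
    inside, so that u ~ -1 in the interior and u ~ +1 in the exterior.
    """
    d0 = np.sqrt((x1 - center[0])**2 + (x2 - center[1])**2) - R
    return np.tanh(d0 / (np.sqrt(2.0) * eps))
```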

The discrete increments \(\Delta _j W^h = W^h(t_j) - W^h(t_{j-1})\) in (6.1) are \({\mathbb {V}}_h\)-valued random variables which approximate the increments of a \({\mathcal Q}\)-Wiener process on a probability space \((\Omega , {\mathcal {F}},{\mathbb {P}})\), given by

$$\begin{aligned} W(t,x) = \sum _{i=1}^{\infty } \lambda _i e_i(x)\beta _i(t)\, , \end{aligned}$$

where \(\{e_i\}_{i\in {\mathbb {N}}}\) is an orthonormal basis of \({\mathbb {L}}^2({\mathcal {D}})\), \(\{\beta _i\}_{i\in {\mathbb {N}}}\) are independent real-valued Brownian motions, and \(\{\lambda _i\}_{i\in {\mathbb {N}}}\) are real-valued coefficients such that \({\mathcal {Q}}e_i = \lambda _i^2e_i\), \(i\in {\mathbb {N}}\). In order to preserve mass, the noise is required to satisfy \(\int _{\mathcal {D}} W(t,x) \,{\mathrm{d}}x = 0\) \({\mathbb {P}}\)-a.s. for all \(t\in [0,T]\).

In the experiments below we consider two types of Wiener processes: a smooth (finite-dimensional) noise and an \({\mathbb {L}}^2_0\)-cylindrical Wiener process (space–time white noise). The smooth noise is given by

$$\begin{aligned} \displaystyle \Delta _j {\widehat{W}}(x) = \frac{1}{2}\sum _{k,\ell =1}^{64}\cos (2\pi kx_1)\cos (2\pi \ell x_2)\Delta _j \beta _{k\ell }\qquad x=(x_1,x_2)\in [0,1]^2\,, \end{aligned}$$

where \(\Delta _j \beta _{k\ell } = \beta _{k\ell }(t_j)-\beta _{k\ell }(t_{j-1})\) are independent scalar-valued Brownian increments. The discrete approximation of the smooth noise is then constructed as

$$\begin{aligned} \displaystyle \Delta _j W^h(x) = \sum _{\ell =1}^{L} \Delta _j {\widehat{W}}(x_\ell )\phi _\ell (x), \end{aligned}$$
(6.2)

where \(\phi _\ell (x_m) = \delta _{\ell m}\), \(\ell =1,\dots , L\), are the (standard) nodal basis functions of \({\mathbb {V}}_h\), i.e., \({\mathbb {V}}_h = \mathrm {span}\{\phi _\ell , \, \ell =1, \dots , L\}\). The space–time white noise (\({\mathcal {Q}} = I\)) is approximated as (cf. [5])

$$\begin{aligned} \Delta _j {\widetilde{W}}^h(x) = \sum _{\ell =1}^{L} \frac{\phi _\ell (x)}{\sqrt{\frac{1}{3}|\mathrm {supp} \, \phi _\ell |}} \Delta _{j} {\beta }_\ell \qquad \forall \, x\in \overline{{\mathcal {D}}}\subset {\mathbb {R}}^2\,. \end{aligned}$$

In order to preserve the zero mean value property of the noise we normalize the increments as

$$\begin{aligned} \Delta _j W^h = \Delta _j {\widetilde{W}}^h - \frac{1}{|{\mathcal {D}}|}\int _{\mathcal {D}} \Delta _j {\widetilde{W}}^h\, {\mathrm{d}}x . \end{aligned}$$
(6.3)

The Wiener process is simulated using a standard Monte Carlo technique, i.e., for \(\omega _m \in \Omega \), \(m=1, \dots , M\), we approximate the Brownian increments in (6.2), (6.3) as \(\Delta _j \beta _\ell (\omega _m) \approx \sqrt{k}\, {\mathcal {N}}_\ell ^j(0,1)(\omega _m)\), where \({\mathcal {N}}_\ell ^j(0,1)(\omega _m)\) is a realization of a standard Gaussian random variable drawn at time level \(t_j\). The discrete nonlinear systems arising from (realizations of) the scheme (6.1) are solved using the Newton method with a multigrid linear solver.
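A sketch of sampling the two types of increments (the smooth noise evaluated at mesh nodes, entering (6.2), and the mean-free approximate white noise (6.3)); the node coordinates, the support measures \(|\mathrm {supp}\,\phi _\ell |\), and the plain nodal mean used in the normalization are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng()

def smooth_increment(nodes, k, K=64):
    """Increment of the smooth noise at the mesh nodes (shape (L, 2)):
    (1/2) * sum_{k,l=1..K} cos(2*pi*k*x1) * cos(2*pi*l*x2) * db_kl,
    with i.i.d. Brownian increments db_kl ~ N(0, k)."""
    db = np.sqrt(k) * rng.standard_normal((K, K))
    c1 = np.cos(2 * np.pi * np.outer(nodes[:, 0], np.arange(1, K + 1)))  # (L, K)
    c2 = np.cos(2 * np.pi * np.outer(nodes[:, 1], np.arange(1, K + 1)))  # (L, K)
    return 0.5 * np.einsum('ik,kl,il->i', c1, db, c2)

def white_increment(k, supp_measure):
    """Mean-free approximate space-time white noise increment, cf. (6.3):
    nodal coefficients db_l / sqrt(|supp phi_l| / 3), then subtract the mean.
    (The exact normalization subtracts the L^2 mean of the FE function; the
    nodal average below is a simplification on a quasi-uniform mesh.)"""
    db = np.sqrt(k) * rng.standard_normal(supp_measure.size)
    W = db / np.sqrt(supp_measure / 3.0)
    return W - W.mean()
```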

To increase the efficiency of the computations we employ a pathwise mesh refinement algorithm. For a realization \(X_{h,m}^{j}:=X_h^{j}(\omega _m)\), \(\omega _m\in \Omega \) of the \({\mathbb {V}}_h\)-valued random variable \(X_h^{j}\) we define \(\eta _{grad}(x) = \max \{|\nabla X_{h,m}^{j}(x)|, |\nabla X_{h,m}^{j-1}(x)|\}\) and refine the finite element mesh in such a way that \(h(x) = h_{\mathrm {min}}\) if \(\varepsilon \eta _{grad}(x) \ge 10^{-2}\) and \(h(x) \approx h_{\mathrm {max}}\) if \(\varepsilon \eta _{grad}(x) \le 10^{-3}\); the mesh produced at time level j is then used for the computation of \(X_{h,m}^{j+1}\). The adaptive algorithm produces meshes with mesh size \(h = h_{\mathrm {min}}\) along the interfacial area and \(h \approx h_{\mathrm {max}}\) in the bulk where \(u \approx \pm 1\), see Fig. 3 for a typical adapted mesh. In our computations we choose \(h_{\mathrm {max}} = 2^{-6}\) and \(h_{\mathrm {min}} = \frac{\pi }{4}\varepsilon \), i.e. \(h_{\mathrm {min}} = h_{\mathrm {max}}\) for \(\varepsilon \ge 1/(16\pi )\) and \(h_{\mathrm {min}}\) scales linearly for smaller values of \(\varepsilon \).
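Schematically, the refinement rule maps nodal gradient magnitudes to a target mesh size; the geometric interpolation between the two thresholds in the sketch below is one possible choice (the text only prescribes the two regimes, and the actual implementation marks elements of the finite element mesh):

```python
import numpy as np

def target_mesh_size(grad_now, grad_prev, eps, h_min, h_max):
    """Pathwise refinement indicator: eta_grad = max(|grad X^j|, |grad X^{j-1}|);
    h = h_min where eps*eta_grad >= 1e-2, h ~ h_max where eps*eta_grad <= 1e-3,
    geometric interpolation in between (an illustrative choice)."""
    eta = np.maximum(np.abs(grad_now), np.abs(grad_prev))
    s = np.clip(np.log10(eps * eta + 1e-300) + 3.0, 0.0, 1.0)  # 0 at 1e-3, 1 at 1e-2
    return h_max * (h_min / h_max)**s
```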

In the presented simulations, mesh refinement did not appear to significantly influence the asymptotic behavior of the numerical solution. This is supported by a comparison with additional numerical simulations on uniform meshes. The observed robustness of the numerical simulations with respect to the mesh refinement can be explained by the fact that the asymptotics are determined by pathwise properties of the solution on a large probability set. This conjecture is supported by the convergence in probability in Theorem 5.7 and Corollary 5.8. In the present setup, the (possible) bias due to the pathwise adaptive mesh refinement did not have a significant impact on the results. In general, the use of adaptive algorithms with rigorous control of weak errors may be a preferable approach, cf. [25].

6.2 Stochastic Mullins–Sekerka problem and its discretization

We consider the following stochastic modification of the Mullins–Sekerka problem (5.1)

$$\begin{aligned} - \Delta \, v\, \mathrm {d}t&= g\, \mathrm {d}W&\qquad \text{ in } \ {\mathcal {D}} \setminus \Gamma _t\,, \end{aligned}$$
(6.4a)
$$\begin{aligned} \left[ \partial _{{n_\Gamma }} v \right] _{\Gamma _t}&= - 2\,{\mathcal {V}}&\qquad \text{ on } \ \Gamma _t\,, \end{aligned}$$
(6.4b)
$$\begin{aligned} v&= \alpha \,\varkappa&\qquad \text{ on } \ \Gamma _t\,, \end{aligned}$$
(6.4c)
$$\begin{aligned} \partial _{{n}} v&= 0&\qquad \text{ on } \partial {\mathcal {D}}\,, \end{aligned}$$
(6.4d)
$$\begin{aligned} \Gamma _0&= \Gamma _{00} \,. \end{aligned}$$
(6.4e)

We note that the only difference between (5.1) and (6.4) lies in the respective equations (5.1a) and (6.4a). Alternatively, equation (6.4a) can be stated in integral form as

$$\begin{aligned} -\int _0^t \Delta v\, {\mathrm{d}}s = g\int _0^t \mathrm {d}W \qquad \mathrm {in}\quad {\mathcal {D}}\setminus \Gamma _t. \end{aligned}$$

For the approximation of the stochastic Mullins–Sekerka problem (6.4), we adapt the unfitted finite element approximation for the deterministic problem (5.1) from [6]. In particular, let \(\Gamma ^{j-1}\) be a polygonal approximation of the interface \(\Gamma \) at time \(t_{j-1}\), parameterized by \({Y}^{j-1}_h \in [{\mathbb {V}}_h(I)]^2\), where \(I = {{\mathbb {R}}} /{{\mathbb {Z}}}\) is the periodic unit interval, and where \({\mathbb {V}}_h(I)\) is the space of continuous piecewise linear finite elements on I with uniform mesh size h. Let \(\pi ^h:C(I) \rightarrow {\mathbb {V}}_h(I)\) be the standard nodal interpolation operator, and let \(\langle \cdot ,\cdot \rangle \) denote the \(L^2\)–inner product on I, with \(\langle \cdot ,\cdot \rangle _h\) the corresponding mass-lumped inner product. Then we find \(v_h^{j} \in {\mathbb {V}}_h\), \({Y}^{j}_h \in [{\mathbb {V}}_h(I)]^2\) and \(\kappa ^{j}_h \in {\mathbb {V}}_h(I)\) such that

$$\begin{aligned}&k\,(\nabla \,v^{j}_h, \nabla \,\varphi _h) - 2\, \left\langle \pi ^h\left[ \bigl ({{Y}^{j}_h-{Y}^{j-1}_h}\bigr ) \cdot {\nu }^{j-1}_h \right] , \varphi _h \circ {Y}^{j-1}_h\,|[{Y}^{j-1}_h]_\rho | \right\rangle = \bigl (g\Delta _j W^h, \varphi _h \bigr )_h \nonumber \\& \qquad \forall \ \varphi _h \in {\mathbb {V}}_h \,, \end{aligned}$$
(6.5a)
$$\begin{aligned}&\langle v^{j}_h, \chi _h \,|[{Y}^{j-1}_h]_\rho | \rangle - \alpha \,\langle \kappa ^{j}_h, \chi _h \,|[{Y}^{j-1}_h]_\rho | \rangle _h = 0 \qquad \forall \ \chi _h \in {\mathbb {V}}_h(I)\,, \end{aligned}$$
(6.5b)
$$\begin{aligned}&\langle \kappa ^{j}_h\,{\nu }^{j-1}_h, {\eta }_h \,|[{Y}^{j-1}_h]_\rho | \rangle _h + \langle [{Y}^{j}_h]_\rho , [{\eta }_h]_\rho \,|[{Y}^{j-1}_h]_\rho |^{-1} \rangle = 0 \qquad \forall \ {\eta }_h \in [{\mathbb {V}}_h(I)]^2\,. \end{aligned}$$
(6.5c)

In the above, \(\rho \) denotes the parameterization variable, so that \(|[{Y}^{j-1}_h]_\rho |\) is the length element on \(\Gamma ^{j-1}\), and \({\nu }^{j-1}_h \in [{\mathbb {V}}_h(I)]^2\) is a nodal discrete normal vector; see [6] for the precise definitions.
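For orientation, the discrete geometric quantities entering (6.5) can be sketched as follows for a closed polygonal curve with nodal positions \(Y_\ell \) (a simplified illustration; the exact definitions, in particular the averaged nodal normal \(\nu ^{j-1}_h\), are those of [6]):

```python
import numpy as np

def polygon_geometry(Y, h):
    """Y : array (L, 2) of nodal positions of a closed polygon (periodic index,
    matching the periodic interval I); h : uniform mesh size of I.
    Returns |[Y]_rho| per element and unit edge normals (rotated tangents)."""
    edges = np.roll(Y, -1, axis=0) - Y           # Y_{l+1} - Y_l, periodically
    dY_drho = edges / h                          # difference quotient w.r.t. rho
    length_elem = np.linalg.norm(dY_drho, axis=1)   # |[Y]_rho| on each element
    tangents = edges / np.linalg.norm(edges, axis=1)[:, None]
    normals = np.stack([tangents[:, 1], -tangents[:, 0]], axis=1)
    return length_elem, normals
```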

6.3 Convergence to the deterministic sharp-interface limit

6.3.1 One circle

We set \(\gamma =1\), \(g = 8\pi \) and consider the discrete space–time white noise (6.3). We note that this noise does not satisfy the smoothness assumptions required for the theoretical part of the paper (i.e., \(\gamma > 1\) and \(\mathrm {tr}(\Delta {\mathcal {Q}}) < \infty \)); however, the numerical results indicate that for \(\varepsilon \downarrow 0\) the computed evolutions still converge to the deterministic Mullins–Sekerka problem (5.1).

The numerical studies below are performed using the scheme (6.1) with adaptive mesh refinement. The time-step size for \(\varepsilon = 2^{-i}/(64\pi )\), \(i=0, \dots ,4\), was \(k_i=2^{-i}10^{-5}\). The motivation for this \(\varepsilon \)-dependent choice of the time-step size is to eliminate possible effects of numerical damping and to ensure convergence of the Newton solver for smaller values of \(\varepsilon \).

For each \(\varepsilon \) we use the initial condition \(u^{\varepsilon ,h}_0\) that approximates a circle with radius \(R=0.2\). Since circles are stationary solutions of the deterministic Mullins–Sekerka problem, the convergence of the numerical solution of the stochastic Cahn–Hilliard equation to the solution of the Mullins–Sekerka problem for \(\varepsilon \downarrow 0\) can be assessed by measuring the deviations of the zero level-set of the solution \(X_h^j\), \(j=1,\dots , J\), from the circle with radius \(R=0.2\) over a sufficiently long computational time. We note that the zero level-set of the initial condition \(u^{\varepsilon ,h}_0\) above exactly matches the corresponding stationary solution of the Mullins–Sekerka problem, but \(u^{\varepsilon ,h}_0\) is not a stationary solution of the corresponding (discrete) deterministic Cahn–Hilliard equation, i.e., of (6.1) with \(g\equiv 0\). In order to obtain the optimal phase-field profile across the interfacial region, we let \(u^{\varepsilon ,h}_0\) relax towards the discrete stationary state by computing with (6.1) for \(g\equiv 0\) over a short time, and then use the resulting discrete solution as the actual initial condition for the subsequent simulations.
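The deviations plotted in Fig. 1 can be computed, e.g., by combining the level-set extraction sketched in Sect. 5 with the radial deviation from the reference circle (centre and names are again illustrative):

```python
import numpy as np

def radial_deviation(pts, R=0.2, center=(0.5, 0.5)):
    """Maximal deviation of extracted interface points from the circle of
    radius R; for a stationary circle this measures the distance to the
    Mullins-Sekerka solution, cf. Fig. 1."""
    r = np.linalg.norm(pts - np.asarray(center), axis=1)
    return np.max(np.abs(r - R))
```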

The results in Fig. 1 indicate that for decreasing \(\varepsilon \) the evolution of the zero level set of the numerical solution approaches the solution of the deterministic Mullins–Sekerka model, which is represented by the stationary circle with radius 0.2. We observe that the deviations of the interface from the circle are decreasing for smaller \(\varepsilon \).

Fig. 1

Deviation of the interface along the x-axis from the circle for \(\varepsilon = 2^{-i}/(64\pi )\), \(i=0, \dots ,4\)

Fig. 2

Numerical solution for \(\varepsilon =1/(512\pi )\) at time \(t=0,0.007,0.008\)

6.3.2 Two circles

In this experiment we consider the same setup as in the previous one, with an initial condition which consists of two circles with radii \(R_1=0.15\) and \(R_2=0.1\), respectively. The evolution of the solution is more complex than in the previous experiment, as the interface undergoes a topological change. To minimize the Ginzburg–Landau energy, the left (larger) circle grows, the right (smaller) circle shrinks, and the resulting steady state is a single circle with mass equal to the combined mass of the two initial circles; see Fig. 2 for an example of a deterministic evolution with \(\varepsilon =1/(512\pi )\). In Fig. 3 we display the evolution of the x-coordinate of the rightmost point of the interface along the x-axis (i.e., we consider the rightmost point of the right (smaller) circle and, after the right circle disappears, track the rightmost point of the left circle) for the deterministic Cahn–Hilliard equation, for typical realizations of the stochastic Cahn–Hilliard equation with decreasing values of \(\varepsilon \), and for the deterministic Mullins–Sekerka problem. Here the evolutions for the Mullins–Sekerka problem were computed with the scheme (6.5) in the absence of noise. We observe that the solution of the stochastic Cahn–Hilliard equation with the scaled space–time white noise (6.3) converges to the solution of the deterministic Mullins–Sekerka problem for decreasing values of the interfacial width parameter. In addition, the differences between the stochastic and the deterministic evolutions of the Cahn–Hilliard equation diminish for decreasing values of \(\varepsilon \).

Fig. 3

(left) Position of the rightmost point of the interface for the stochastic and the deterministic Cahn–Hilliard equations with \(\varepsilon = 2^{-i}/(64\pi )\), \(i=0, \dots ,4\), \(\gamma =1\) and the deterministic Mullins–Sekerka problem; the values are shifted by \(-0.5\). (right) Zoom on the adapted mesh around the smaller circle for \(\varepsilon =1/(512\pi )\) at \(t=0.007\)

6.4 Comparison with the stochastic Mullins–Sekerka model

We use the numerical scheme (6.1) to study the case of non-vanishing noise, i.e., \(\gamma =0\), with the discrete approximation of the smooth noise (6.2). The noise is symmetric across the center of the domain in order to facilitate the comparison with the Mullins–Sekerka problem. The computations below are pathwise, i.e., in the graphs we display results computed for a single realization of the Wiener process. If not mentioned otherwise, we use the time-step size \(k=10^{-5}\).

The initial condition is taken to be the \(\varepsilon \)-dependent approximation of a circle with radius \(R=0.2\) as in Sect. 6.3.1. In the computations, as before, we first let the initial condition relax to a stationary state and then use the stabilized profile \(X^{0}_{h} := X^{j_{s}}_{h}\) as the initial condition for the computation. The zero level-set of the stationary solution \(X^{j_s}_{h}\) is a circle with perturbed radius \(R=0.2+{\mathcal {O}}(\varepsilon )\), where in general the perturbation \({\mathcal {O}}(\varepsilon )\) also depends on the finite element mesh. To compensate for the effect of this perturbation in the initial condition for larger values of \(\varepsilon \), we represent the interface by a level set \(\Gamma _{u_{\Gamma }}^j := \{x\in {\mathcal {D}};\, X^j_h(x) = u_{\Gamma }\}\) (so that \(\Gamma _{0}^j\) is the zero level set of the discrete solution at time level \(t_j\)) with the value \(u_{\Gamma } = X^{j_{s}}_{h}(0.2,0)\), i.e., the “compensated” level for which the level set \(\Gamma _{u_{\Gamma }}^{j_{s}}\) of the stationary profile coincides with the circle with radius \(R=0.2\). The usual value of the “compensated” level in the computations below was \(u_{\Gamma } \approx 0.27\).
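Obtaining \(u_\Gamma \) amounts to one point evaluation of the relaxed profile; a sketch with bilinear interpolation on a tensor grid (a simplification of evaluating the finite element function \(X^{j_s}_h\); names are hypothetical):

```python
import numpy as np

def eval_on_grid(U, xs, ys, x, y):
    """Bilinear evaluation of nodal values U (shape (len(ys), len(xs))) at (x, y)."""
    j = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2))
    i = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
    s = (x - xs[j]) / (xs[j + 1] - xs[j])
    t = (y - ys[i]) / (ys[i + 1] - ys[i])
    return ((1 - s) * (1 - t) * U[i, j] + s * (1 - t) * U[i, j + 1]
            + (1 - s) * t * U[i + 1, j] + s * t * U[i + 1, j + 1])

# "compensated" level: value of the relaxed profile at the reference point on
# the unperturbed circle; the compensated level set is then tracked with the
# earlier sketch, level_set_points(..., level=u_gamma).
```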

We observe that, in order to properly resolve the spatial variations of the noise, it is necessary to use a mesh size smaller than or equal to \(h_{\mathrm {max}} = 2^{-7}\) for the discretization of the Cahn–Hilliard equation. The computations for the Mullins–Sekerka problem, using the scheme (6.5), were more sensitive to the mesh size, and an accurate resolution for the considered noise required a mesh size \(h_{\mathrm {max}} = 2^{-8}\); cf. Fig. 4, which includes the results for \(h_{\mathrm {max}} = 2^{-8}\) as well as \(h_{\mathrm {max}} = 2^{-7}\).

In Fig. 4 we compare the evolution for the stochastic Cahn–Hilliard equation for \(\varepsilon = 1/(32\pi )\), \(\varepsilon = 1/(64\pi )\) on uniform meshes with \(h = 2^{-7}\), \(h = 2^{-8}\), respectively, with the evolution of the stochastic Mullins–Sekerka problem (6.4) on uniform meshes with \(h = 2^{-7}\), \(h = 2^{-8}\), respectively, for a single realization of the noise. We also include results for \(\varepsilon = 1/(128\pi )\), \(\varepsilon = 1/(512\pi )\), where to make the computations feasible we employ the adaptive algorithm with \(h_{\mathrm {max}}=2^{-8}\) and \(h_{\mathrm {min}}=2^{-9}\), \(h_{\mathrm {min}}=2^{-11}\), respectively. Furthermore, in order to ensure convergence of the Newton solver for \(\varepsilon = 1/(512\pi )\), we decrease the time-step size to \(k=10^{-6}\); to be able to compare directly with the other simulations, we reuse the realization of the noise generated with step size \(k=10^{-5}\) and obtain values at the intermediate time levels by linear interpolation in time. We observe that the results in Fig. 4 for the stochastic Mullins–Sekerka model are more sensitive to the mesh size, i.e., the graph for the mesh with \(h = 2^{-7}\) differs significantly from the remaining results. For the mesh with \(h = 2^{-8}\) the results for the stochastic Mullins–Sekerka model are in good agreement with the results for the stochastic Cahn–Hilliard model. We note that for values smaller than \(\varepsilon = 1/(128\pi )\) we do not observe significant improvements in the approximation of the stochastic Mullins–Sekerka problem. This is likely caused by the discretization errors in the numerical approximation of the stochastic Mullins–Sekerka model which, for small values of \(\varepsilon \), are greater than the approximation error w.r.t. \(\varepsilon \) in the stochastic Cahn–Hilliard equation.

Fig. 4

Oscillations of the interface along the x-axis (x, 0) on uniform meshes for the stochastic Cahn–Hilliard equation with \(\varepsilon = 1/(32\pi )\), \(h = 2^{-7}\), \(\varepsilon = 1/(64\pi )\), \(h = 2^{-8}\), \(\varepsilon = 1/(128\pi )\), \(h_{\mathrm {min}} = 2^{-9}\), \(\varepsilon = 1/(512\pi )\), \(h_{\mathrm {min}} = 2^{-11}\) and for the stochastic Mullins–Sekerka problem with \(h = 2^{-7}\) and \(h = 2^{-8}\) with the noise (6.2) (top left); detail of the evolution (top right); evolution of the zero level-set of the solution (bottom middle)

Fig. 5

Oscillations of the “compensated” level-set along the x-axis (x, 0) with adaptive mesh refinement with \(h_{\mathrm {max}}=2^{-6}\) for the stochastic Cahn–Hilliard equation with \(\varepsilon = 1/(32\pi )\), \(h_{\mathrm {min}} = 2^{-7}\), \(\varepsilon = 1/(64\pi )\), \(h_{\mathrm {min}} = 2^{-8}\), \(\varepsilon = 1/(128\pi )\), \(h_{\mathrm {min}} = 2^{-9}\), and the stochastic Mullins–Sekerka problem with \(h_{\mathrm {min}} = 2^{-8}\), \(h_{\mathrm {max}}=2^{-6}\) with the noise (6.2) (left picture); evolution of the corresponding zero level-set (right picture)

From the above numerical results we conjecture that for \(\varepsilon \downarrow 0\) the solution of the stochastic Cahn–Hilliard equation with a non-vanishing noise term (\(\gamma =0\)) converges to the solution of a stochastic Mullins–Sekerka problem (6.4). Formally, the stochastic Mullins–Sekerka problem (6.4) can be obtained as a sharp-interface limit of a generalized Cahn–Hilliard equation where the noise is treated as a deterministic function \(G_1(t) = g\, {\dot{W}}(t)\), cf. (2.3) in [3] and (1.12) in [4].

To examine the robustness of the previous results with respect to adaptive mesh refinement, we recompute the previous problems with the noise (6.2) using the adaptive mesh refinement algorithm with \(h_{\mathrm {max}}=2^{-6}\) and \(h_{\mathrm {min}}=\frac{\pi }{4}\varepsilon \). The stochastic Mullins–Sekerka model is computed with \(h_{\mathrm {max}}=2^{-6}\), and the mesh is refined along the interface \(\Gamma \) with mesh size \(h_{\mathrm {min}}=2^{-8}\).

We note that with adaptive mesh refinement the results differ from those computed on uniform meshes, since the noise (6.2) is mesh dependent; for instance, in regions with a coarse mesh the noise (6.2) is not properly resolved. The computed results with adaptive mesh refinement can be interpreted as replacing the additive noise (6.2) by a multiplicative-type noise with lower intensity where \(u\approx \pm 1\). The presented computations also contain an additional “geometric” contribution to the numerical error, owing to the fact that the mesh is adapted according to the position of the interface, and to the fact that the adaptive mesh refinement algorithm for the Mullins–Sekerka problem is different. Nevertheless, the results are still in good agreement with the stochastic Mullins–Sekerka problem, see Fig. 5. In particular, we observe that the convergence for smaller values of \(\varepsilon \) is more evident for the zero level-set of the solution than in the case of uniform meshes. In Fig. 5 we also include a graph ('ftilde', in pink) which was computed using a modification of the scheme (6.1) with \(\bigl (f(X^j_h),\psi _h \bigr )\) replaced by \(\bigl ({\tilde{f}}(X^j_h, X^{j-1}_h),\psi _h \bigr )\), where \({\tilde{f}}(X^j_h, X^{j-1}_h) = \frac{1}{2}(|X^j_h|^2-1)(X^j_h + X^{j-1}_h)\); for equal time-step sizes the modified scheme provides a worse approximation of the Mullins–Sekerka problem due to numerical damping.
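For concreteness, the two drift evaluations compared in Fig. 5 are, as functions of nodal values (a sketch; X and Xold stand for \(X^j_h\) and \(X^{j-1}_h\)):

```python
def f_implicit(X):
    """Fully implicit nonlinearity f(X^j) = (X^j)^3 - X^j, as used in (6.1)."""
    return X**3 - X

def f_tilde(X, Xold):
    """Modified nonlinearity ftilde(X^j, X^{j-1}) = 0.5*(|X^j|^2 - 1)*(X^j + X^{j-1});
    for equal time-step sizes it exhibits stronger numerical damping (cf. Fig. 5)."""
    return 0.5 * (X**2 - 1.0) * (X + Xold)
```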