1 Introduction

Stochastic partial differential equations with fractional Laplacian arise in many fields, such as physics, fractal media, image analysis and risk management (see Debbi [4], Mueller [12] and Xie [15]). In this paper, we discuss the existence and uniqueness of solutions to the initial value problem for the following stochastic partial differential equation (SPDE):

$$\begin{aligned} \left\{ \begin{aligned} \frac{\partial X}{\partial t}(t,x)&=-(-\Delta )^{\alpha /2}X(t,x)+|X(t,x)|^\gamma {\dot{W}}(t,x),\quad t\ge 0,~ x\in {\mathbb {R}},\\ X(0,x)&=f(x), \end{aligned}\right. \end{aligned}$$
(1.1)

where \(-(-\Delta )^{\alpha /2}\) is the fractional Laplacian with order \(1<\alpha <2\), f is an initial value, \({\dot{W}}(t,x)\) is a space-time white noise on \([0,\infty )\times {\mathbb {R}}\), and \(\gamma \) is a parameter satisfying \(1/2<\gamma <1\).

We first review some known results related to (1.1). Consider the case where \(\alpha =2\) and the noise coefficient is Lipschitz continuous, that is,

$$\begin{aligned} \left\{ \begin{aligned} \frac{\partial X}{\partial t}(t,x)&=\Delta X(t,x)+a(X(t,x)){\dot{W}}(t,x),\quad t\ge 0,~ x\in {\mathbb {R}},\\ X(0,x)&=f(x), \end{aligned}\right. \end{aligned}$$
(1.2)

where \(a:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is Lipschitz continuous. Equation (1.2) was studied by Funaki [7] and Walsh [18]. They showed the unique existence of a solution to (1.2) and the spatial regularity of the solution. Later Mueller–Perkins [11] and Shiga [17] studied the SPDE (1.2) without assuming the Lipschitz continuity of the coefficient a. They imposed the following growth condition on a(u):

$$\begin{aligned} |a(u)|\le C(|u|+|u|^\theta ),\quad u\in {\mathbb {R}} \end{aligned}$$
(1.3)

for some \(0<\theta <1\). They showed the existence of a probability space \(({\bar{\Omega }},\bar{{\mathcal {F}}},{\bar{P}})\) on which there is a space-time white noise \(\dot{{\bar{W}}}(t,x)\) such that (1.2), with \({\dot{W}}(t,x)\) replaced by \(\dot{{\bar{W}}}(t,x)\), has a mild solution X(t, x). In addition, they proved that, if f is positive, then any solution X(t, x) to (1.2) is positive for all \(t\ge 0\) and \(x\in {\mathbb {R}}\). Moreover, it was shown that, if f(x) decreases sufficiently rapidly as \(|x|\rightarrow \infty \), then any solution X(t, x) to (1.2) also decreases sufficiently rapidly. Mytnik [13] proved the weak uniqueness of solutions to the following equation, which is a special case of (1.2):

$$\begin{aligned} \frac{\partial X}{\partial t}(t,x)=\Delta X(t,x)+|X(t,x)|^\gamma {\dot{W}}(t,x),\quad t\ge 0,~ x\in {\mathbb {R}}, \end{aligned}$$
(1.4)

with \(1/2<\gamma <1\). The idea of his proof is based on a duality argument developed by Ethier–Kurtz [5]. Mytnik [14] studied the dual process Y described by the following SPDE:

$$\begin{aligned} \frac{\partial Y}{\partial t}(t,x)=\Delta Y(t,x)+|Y(t,x)|^{1/\gamma }{\dot{L}}(t,x),\quad t\ge 0,~x\in {\mathbb {R}}, \end{aligned}$$
(1.5)

where \({\dot{L}}\) is a stable noise on \({\mathbb {R}}\times {\mathbb {R}}_+\) with nonnegative jumps. He constructed a probability space \(({\tilde{\Omega }},\tilde{{\mathcal {F}}},{\tilde{P}})\) on which there exist a stable noise \({\dot{L}}\) and a random field Y(t, x) such that Y(t, x) is a mild solution of (1.5).

Recently, SPDEs with fractional Laplacian have been discussed by several authors (see, e.g., Chen [1], Debbi [4], Niu–Xie [15]). Consider (1.2) with \(\Delta \) replaced by \(-(-\Delta )^{\alpha /2}\) for \(1<\alpha <2\) and with a Lipschitz continuous noise coefficient, that is,

$$\begin{aligned} \left\{ \begin{aligned} \frac{\partial X}{\partial t}(t,x)&=-(-\Delta )^{\alpha /2} X(t,x)+a(X(t,x)){\dot{W}}(t,x),\quad t\ge 0,~ x\in {\mathbb {R}},\\ X(0,x)&=f(x), \end{aligned}\right. \end{aligned}$$
(1.6)

where \(a:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is Lipschitz continuous. Chen [1], Debbi [4] and Niu–Xie [15] have shown the unique existence of a mild solution to (1.6) and its regularity.

However, it is still an open problem to show the existence and uniqueness of solutions to (1.6) without assuming the Lipschitz continuity of a(u). The purpose of this paper is to establish the existence and weak uniqueness of a mild solution to (1.1), which is the special case of (1.6) where \(a(u)=|u|^\gamma \) with \(\frac{1}{2}<\gamma <1\). We use arguments similar to those of Mueller–Perkins [11], Shiga [17] and Mytnik [13]. The main difference between the fractional Laplacian and the usual Laplacian lies in the decay of the fundamental solutions as \(|x|\rightarrow \infty \). Since the fundamental solution of the usual Laplacian decays exponentially, the solution X(t, x) to (1.2) has an exponential decay property when the condition (1.3) holds. The fundamental solution of the fractional Laplacian decays only polynomially, and we show that the solution to (1.1) correspondingly has a polynomial decay property when the condition (1.3) holds. To prove the uniqueness of solutions to (1.4), Mytnik [13] used the exponential decay property; for the uniqueness of solutions to (1.1), the polynomial decay property suffices.

This paper is organized as follows: in Sect. 2 we prepare some tools and lemmas needed to prove our main results. In Sect. 3, we show the existence and uniqueness of the solution by applying the Banach fixed point theorem in the case of a Lipschitz continuous coefficient a(X). In addition, we follow the arguments of Mueller–Perkins [11] and Shiga [17] to prove positivity and polynomial decay properties of solutions to (1.6). In Sect. 4 we turn to (1.1). We prove the existence of a solution to (1.1) by a tightness argument. In fact, uniqueness of the solution in the distributional sense can be proved by applying the duality method; since the proof is almost the same as in [5], we omit the details.

2 Preliminaries

2.1 Fractional differential operator

For \(1<\alpha \le 2\), let \(-(-\Delta )^{\alpha /2}\) be the fractional differential operator defined through the Fourier transform \({\mathcal {F}}\):

$$\begin{aligned} {\mathcal {F}}((-\Delta )^{\alpha /2}X)(\xi )=|\xi |^\alpha {\mathcal {F}}(X)(\xi ),~~~X\in D((-\Delta )^{\alpha /2}),~\xi \in {\mathbb {R}}, \end{aligned}$$

where

$$\begin{aligned} {\mathcal {F}}(X)(\xi )=\int _{\mathbb {R}}X(x)e^{-ix\xi }dx. \end{aligned}$$

Let G(t, x) be a fundamental solution to the Cauchy problem

$$\begin{aligned} \left\{ \begin{aligned} \frac{\partial G}{\partial t}(t,x)&=-(-\Delta )^{\alpha /2}G(t,x),\\ G(0,x)&=\delta _0(x), \end{aligned} \right. \end{aligned}$$
(2.1)

where \(\delta _0\) denotes the Dirac measure. Then G(t, x) can be expressed using the Fourier transform:

$$\begin{aligned} G(t,x)=\frac{1}{2\pi }\int _{\mathbb {R}}\exp \left\{ -i\xi x-t|\xi |^\alpha \right\} d\xi \end{aligned}$$

and has the following properties (cf. Debbi–Dozzi [4] and Kotelenez [9]):

Lemma 2.1

Let G(t, x) be the fundamental solution of (2.1) and define

$$\begin{aligned} \lambda (x)=(1+|x|^2)^{\frac{1}{2}},~~~x\in {\mathbb {R}}. \end{aligned}$$

Then there exists a constant \(C_\alpha >0\) such that for all \(0\le s \le t\le T,~x\in {\mathbb {R}}\), and \(0<\rho \le (\alpha +1)/2\), the following properties hold:

$$\begin{aligned}&\mathrm{(i)}\frac{\partial ^n G}{\partial x^n}(t,x)=t^{-\frac{n+1}{\alpha }}\frac{\partial ^nG(1,\xi )}{\partial \xi ^n}|_{\xi =t^{-\frac{1}{\alpha }}x}.\\&\mathrm{(ii)}\int _{{\mathbb {R}}}G(t,x)\lambda ^\rho (x)dx<\infty .\\&\mathrm{(iii)}|G(1,x)|\le C_\alpha (1+|x|^{1+\alpha })^{-1}.\\ \\&\mathrm{(iv)}\left| \frac{\partial ^n G}{\partial x^n}(1,x)\right| \le C_\alpha \frac{1+|x|^{\alpha +n-1}}{(1+|x|^{\alpha +n})^2}. \end{aligned}$$

Hereafter, we sometimes write \(G(t,x-y)=G(t,x,y)\). Note that there exist constants \(C_1,C_2>0\) such that \(\lambda ^{-\rho }(y)\le C_1\lambda ^{\rho }(x-y)\lambda ^{-\rho }(x)\), and hence from Lemma 2.1 (ii) we obtain the estimate

$$\begin{aligned} \int _{\mathbb {R}}G(t,x,y)\lambda ^{-\rho }(y)dy\le C_2\lambda ^{-\rho }(x). \end{aligned}$$
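The scaling identity (i) and the tail bound (iii) in Lemma 2.1 are easy to inspect numerically. The following Python sketch is purely illustrative and plays no role in the proofs; the Fourier cutoff, the quadrature grid and the choice \(\alpha =1.5\) are ad hoc assumptions.

```python
import numpy as np

def G(t, x, alpha=1.5, xi_max=80.0, n_xi=16001):
    """G(t,x) = (1/2pi) int exp(-i xi x - t|xi|^alpha) dxi, by naive quadrature."""
    xi = np.linspace(-xi_max, xi_max, n_xi)
    dxi = xi[1] - xi[0]
    integrand = np.exp(-t * np.abs(xi) ** alpha) * np.cos(np.outer(np.atleast_1d(x), xi))
    return integrand.sum(axis=-1) * dxi / (2 * np.pi)

alpha = 1.5
x = np.linspace(-20.0, 20.0, 401)

# Scaling (Lemma 2.1 (i) with n = 0): G(t,x) = t^{-1/alpha} G(1, t^{-1/alpha} x).
t = 0.7
err = np.max(np.abs(G(t, x, alpha) - t ** (-1 / alpha) * G(1.0, t ** (-1 / alpha) * x, alpha)))
print("max scaling error:", err)

# Tail bound (Lemma 2.1 (iii)): |G(1,x)| <= C_alpha (1 + |x|^{1+alpha})^{-1}.
ratio = np.abs(G(1.0, x, alpha)) * (1.0 + np.abs(x) ** (1 + alpha))
print("empirical C_alpha (should stay bounded):", ratio.max())
```

The first printed quantity should be close to zero and the second should stay of order one; analogous checks can be run for any \(1<\alpha <2\).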

2.2 Definition of the solution

Let \((\Omega ,{\mathcal {F}},{\mathcal {F}}_t,P)\) be a complete probability space with a filtration and let \({\dot{W}}(t,x)\) be an \(\{{\mathcal {F}}_t\}\)-space-time Gaussian white noise with covariance given by

$$\begin{aligned} E[{\dot{W}}(t,x){\dot{W}}(t',x')]=\delta (t-t')\delta (x-x') \end{aligned}$$

for \(t,t'\ge 0\) and \(x,x'\in {\mathbb {R}}\). For an \(\{{\mathcal {F}}_t\}\)-predictable functional \(\phi (t,x,\omega ):[0,\infty )\times {\mathbb {R}}\times \Omega \rightarrow {\mathbb {R}}\) satisfying

$$\begin{aligned} E\left[ \int _0^t\int _{\mathbb {R}}\phi ^2(s,x,\omega )dxds\right] <\infty ~~~\mathrm{for~all~}t>0, \end{aligned}$$
(2.2)

we can define the stochastic integral (cf. Walsh [18])

$$\begin{aligned} \int _0^t\int _{\mathbb {R}}\phi (s,x,\omega )W(dx,ds), \end{aligned}$$

whose quadratic variation process is

$$\begin{aligned} \int _0^t\int _{\mathbb {R}}\phi ^2(s,x,\omega )dxds. \end{aligned}$$
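On a space–time grid the white noise W(dx, ds) can be modelled by independent centred Gaussians with variance \(\Delta x\,\Delta s\) per cell, and the stochastic integral by the corresponding sum. The sketch below (the grid, the truncated spatial domain and the test integrand \(\phi \) are illustrative choices, not taken from the paper) checks the Walsh isometry \(E[(\int _0^t\int _{\mathbb {R}}\phi \,W(dx,ds))^2]=\int _0^t\int _{\mathbb {R}}\phi ^2\,dxds\) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

T, L, ns, nx = 1.0, 5.0, 200, 400          # time horizon, spatial cutoff, grid sizes
ds, dx = T / ns, 2 * L / nx
s = (np.arange(ns) + 0.5) * ds
x = -L + (np.arange(nx) + 0.5) * dx
S, X = np.meshgrid(s, x, indexing="ij")

phi = np.exp(-X**2) * np.cos(S)            # a deterministic test integrand phi(s, x)

n_mc = 2000
vals = np.empty(n_mc)
for k in range(n_mc):
    dW = rng.normal(0.0, np.sqrt(dx * ds), size=(ns, nx))   # W(dx, ds) per grid cell
    vals[k] = np.sum(phi * dW)                               # discrete Walsh integral

print("Monte Carlo variance of the integral:", vals.var())
print("int_0^T int phi^2 dx ds             :", np.sum(phi**2) * dx * ds)
```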

Equation (1.1) makes sense once we integrate it in time and space and use the initial condition.

Definition 2.1

An \(({\mathcal {F}}_t)\)-adapted random field \(\{X(t,x),t\ge 0,x\in {\mathbb {R}}\}\) is said to be a solution in the sense of generalized functions of (1.1) if for any \(\phi \in C_0^\infty ({\mathbb {R}})\), the following equality holds:

$$\begin{aligned} \int _{\mathbb {R}}X(t,x)\phi (x)dx&=\int _{\mathbb {R}}f(x)\phi (x)dx-\int _0^t\int _{\mathbb {R}}X(s,x)(-\Delta )^{\alpha /2}\phi (x)dxds\nonumber \\&\quad +\int _0^t\int _{\mathbb {R}}|X(s,x)|^\gamma \phi (x)W(dx,ds)\quad \mathrm{a.s.} \end{aligned}$$
(2.3)

Using the Green function, we can describe a solution of (1.1) in a mild form.

Definition 2.2

An \(({\mathcal {F}}_t)\)-adapted random field \(\{X(t,x),t\ge 0,x\in {\mathbb {R}}\}\) is said to be a mild solution of (1.1) with initial function f if the following stochastic integral equation holds:

$$\begin{aligned} X(t,x)=\int _{\mathbb {R}}G(t,x,y)f(y)dy+\int _0^t\int _{\mathbb {R}}G(t-s,x,y)|X(s,y)|^\gamma W(dy,ds)~~~\mathrm{a.s.},\nonumber \\ \end{aligned}$$
(2.4)

where G(t, x, y) denotes the Green function of (2.1).

We introduce a martingale problem induced by (1.1).

Definition 2.3

Let S be a Banach space. By a solution to the martingale problem for (1.1) we mean a measurable process X with values in S, defined on some probability space \((\Omega ,{\mathcal {F}},P,\{{\mathcal {F}}_t\})\) with a filtration, such that for all \(\phi \in {\mathcal {D}}((-\Delta )^{\alpha /2})\)

$$\begin{aligned} \langle X_t,\phi \rangle -\langle X_0,\phi \rangle +\int _0^t\langle X_s,(-\Delta )^{\alpha /2}\phi \rangle ds \end{aligned}$$
(2.5)

is an \({\mathcal {F}}_t^X\)-square integrable martingale whose quadratic variation is given by

$$\begin{aligned} \int _0^t\int _{\mathbb {R}}|X(s,y)|^{2\gamma }\phi (y)^2dyds. \end{aligned}$$

Definition 2.4

Let S be a Banach space and \(X_1\) and \(X_2\) be S-valued mild solutions to the SPDE (1.1) with the same initial value. We say that the SPDE (1.1) has pathwise uniqueness if

$$\begin{aligned} P(|X_1(t,\cdot )-X_2(t,\cdot )|_S=0,\quad {\forall }t\ge 0)=1 \end{aligned}$$

holds.

For these definitions to make sense, it is required that all the terms in (2.3) and (2.4) are well defined. The relationship between a solution in the sense of generalized functions and a mild solution is well known (cf. [18]).

Proposition 2.1

A random field is a solution of (1.1) in the sense of generalized functions if and only if it is a mild solution of (1.1).

A solution to the martingale problem and a mild solution in the weak sense are also equivalent (cf. [7]).

Proposition 2.2

The following (1) and (2) are equivalent.

  1. (1)

X(t, x) is a solution to the martingale problem for (1.1).

  2. (2)

There exist an \(\{{\mathcal {F}}_t\}\)-space-time Gaussian white noise \({\dot{W}}(t,x)\) and a stochastic process X(t, x) such that X(t, x) is a mild solution of the SPDE (1.1) on a suitable probability space with filtration \((\Omega ,{\mathcal {F}},P,\{{\mathcal {F}}_t\}).\)

3 Some properties of solutions in Lipschitz case

3.1 Existence and uniqueness of mild solutions

In order to prove the existence and uniqueness of solutions to the SPDE (1.1), we first consider the case where the coefficient is Lipschitz continuous:

$$\begin{aligned} \frac{\partial X_t}{\partial t}=-(-\Delta )^{\alpha /2}X_t+a(X_t){\dot{W}}(t,x) \end{aligned}$$
(3.1)

where \(a:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is Lipschitz continuous. Using the Green function, we can write the solution of Eq. (3.1) in a mild form:

$$\begin{aligned} X(t,x)=\int _{\mathbb {R}}G(t,x,y)f(y)dy+\int _0^t\int _{\mathbb {R}}G(t-s,x,y)a(X(s,y))W(dy,ds). \end{aligned}$$
(3.2)
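Although the analysis below is carried out on the whole line, a rough numerical picture of (3.1)–(3.2) can be obtained on a periodic grid, treating the fractional Laplacian spectrally and adding the noise by an explicit Euler step. Everything in this sketch (the domain truncation, the grid sizes and the particular Lipschitz coefficient a) is an illustrative assumption, not a scheme used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, L, nx, T, nt = 1.5, 20.0, 512, 0.5, 20000
dx, dt = L / nx, T / nt
x = np.arange(nx) * dx - L / 2

# Fourier symbol of -(-Delta)^{alpha/2} on the periodic grid (spectral approximation).
xi = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
symbol = -np.abs(xi) ** alpha

def a(u):                                   # a hypothetical Lipschitz coefficient
    return np.minimum(np.abs(u), 1.0)

X = np.exp(-x**2)                           # initial condition f

for _ in range(nt):
    # Exact integration of the linear part over one time step, done in Fourier space,
    X = np.fft.ifft(np.exp(symbol * dt) * np.fft.fft(X)).real
    # followed by the noise term: W(dy, ds) ~ N(0, dx*dt) per cell, divided by dx.
    X = X + a(X) * rng.normal(0.0, np.sqrt(dt / dx), size=nx)

print("sup_x |X(T, x)| on the grid:", np.max(np.abs(X)))
```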

For any \(0<\rho <(\alpha +1)/2\), define the weighted \(L^2\)-norm

$$\begin{aligned} \Vert X\Vert _{L^2_{\rho }}=\left( \int _{\mathbb {R}}|X(x)|^2\lambda ^{-\rho }(x)dx\right) ^{1/2}. \end{aligned}$$

Theorem 3.1

Assume that \(f\in L^2_\rho ({\mathbb {R}})\) and that \(a:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a Lipschitz function satisfying the linear growth condition, that is, there exists \(C>0\) such that

$$\begin{aligned} |a(u)|\le C(1+|u|),~~~\mathrm{for~all~}u\in {\mathbb {R}}. \end{aligned}$$
(3.3)

Then (3.2) admits a pathwise unique mild solution \(\{X(t,x)\}_{(t,x)\in [0,T]\times {\mathbb {R}}}\) such that for each \(T>0\),

$$\begin{aligned} \sup _{0\le t\le T}E[\Vert X(t,\cdot )\Vert _{L^2_\rho }^2]<\infty . \end{aligned}$$

For every \(0<\rho <(\alpha +1)/2\), define a function space

$$\begin{aligned} C_\rho ({\mathbb {R}}):=\left\{ g\in C({\mathbb {R}}):~\sup _{x\in {\mathbb {R}}}\lambda ^\rho (x)|g(x)|<\infty \right\} . \end{aligned}$$

The regularity of the solution to (3.1) is well known (cf. [4], [15]).

Theorem 3.2

Under the conditions of Theorem 3.1 with initial condition \(f\in C_\rho ({\mathbb {R}})\), the solution X(t, x) of (3.1) has a continuous modification on \([0,\infty )\times {\mathbb {R}}\). More precisely, the solution X(t, x) to (3.1) has a modification which is \(\beta \)-Hölder continuous in \(t\in [0,\infty )\) and \(\eta \)-Hölder continuous in \(x\in {\mathbb {R}}\) for every \(\beta \in (0,\frac{\alpha -1}{2\alpha })\) and \(\eta \in (0,\frac{\alpha -1}{2})\).

3.2 Positivity of solution

We follow Shiga’s arguments [17] to prove the positivity of the solution to (3.1). First, we need a moment bound for the solution to (3.1).

Lemma 3.1

Assume that for every \(T>0\) there exists \(C_T>0\) such that

$$\begin{aligned} |a(u)|\le C_T(|u|+|u|^{1/2}),\quad u\in {\mathbb {R}}. \end{aligned}$$
(3.4)

Then for every p of the form \(p=2^m,~m\in {\mathbb {N}}\cup {\{0\}}\), and every \(0<\rho <(\alpha +1)/2\), the mild solution X(t, x) to (3.2) with initial condition \(f\in C_\rho ({\mathbb {R}})\) satisfies

$$\begin{aligned} \sup _{0\le t\le T}\int _{\mathbb {R}}E[|X(t,x)|^p]\lambda ^\rho (x)dx<\infty . \end{aligned}$$
(3.5)

Proof

First let \(p=1\). Then for every \(0<\rho <(\alpha +1)/2\) and \(t\ge 0\), we have

$$\begin{aligned} \int _{\mathbb {R}}E[|X(t,x)|]\lambda ^\rho (x)dx= & {} \int _{\mathbb {R}}\left| \int _{\mathbb {R}}G(t,x,y)f(y)\lambda ^\rho (x)dy\right| dx\nonumber \\\le & {} C \int _{\mathbb {R}}\left| f(y)\right| \lambda ^\rho (y)dy<\infty . \end{aligned}$$
(3.6)

We will apply an induction argument. For \(p\ge 2\), Burkholder's inequality and Hölder's inequality show that

$$\begin{aligned}&E[|X(t,x)|^p]\\&\quad \le C_p\left\{ \left| \int _{\mathbb {R}}G(t,x,y)f(y)dy\right| ^p+E\left| \int _0^t\int _{\mathbb {R}}G(t-s,x,y)^2a(X(s,y))^2dyds\right| ^{p/2}\right\} \\&\quad \le C_{p,T}\left\{ \left| \int _{\mathbb {R}}G(t,x,y)f(y)dy\right| ^p+E\int _0^t\int _{\mathbb {R}}G(t-s,x,y)^2\left| a(X(s,y))\right| ^pdyds\right\} .\\&\quad \le C_{p,T}\Biggl \{\left| \int _{\mathbb {R}}G(t,x,y)f(y)dy\right| ^p\\&\qquad +E\int _0^t\int _{\mathbb {R}}G(t-s,x,y)^2[|X(s,y)|^p+|X(s,y)|^{p/2}]dyds\Biggl \}. \end{aligned}$$

Multiplying both sides by \(\lambda ^\rho (x)\) and integrating in x, we obtain

$$\begin{aligned}&\int _{\mathbb {R}}E[|X(t,x)|^p]\lambda ^\rho (x)dx\\&\quad \le C_{p,T}\Biggl \{1+E\int _0^t\int _{\mathbb {R}}[|X(s,y)|^p+|X(s,y)|^{p/2}]\int _{\mathbb {R}}G(t-s,x,y)^2\lambda ^\rho (x)dxdyds\Biggl \}\\&\quad \le C_{p,T}\left\{ 1+E\int _0^t\int _{\mathbb {R}}[|X(s,y)|^p+|X(s,y)|^{p/2}](t-s)^{-1/\alpha }\lambda ^\rho (y)dyds\right\} \\&\quad \le C_{p,T}\left\{ 1+\sup _{0\le s \le t}\int _{\mathbb {R}}E[|X(s,y)|^p+|X(s,y)|^{p/2}]\lambda ^\rho (y)dy\int _0^t(t-s)^{-1/\alpha }ds\right\} . \end{aligned}$$

Setting \(p=2\), (3.6) and Gronwall's lemma imply that

$$\begin{aligned} \sup _{0\le t\le T}\int _{\mathbb {R}}E[|X(t,x)|^2]\lambda ^\rho (x)dx<\infty . \end{aligned}$$

An induction argument then completes the proof for \(p=2^m\) with \(m\in {\mathbb {N}}\cup \{0\}\). \(\square \)

Remark 3.1

Arguing similarly to the proof of Lemma 3.1 and using Hölder's inequality, we can also get

$$\begin{aligned} \sup _{0\le t\le T}\sup _{x\in {\mathbb {R}}}E[|X(t,x)|^p\lambda (x)^\rho ]<\infty ~~~\mathrm{for ~all~}p\ge 2. \end{aligned}$$
(3.7)

For any \(0<\rho <(\alpha +1)/2\) define the function space \(C_\rho ^+({\mathbb {R}})\) by

$$\begin{aligned} C_\rho ^+({\mathbb {R}}):=\left\{ g\in C({\mathbb {R}}):~g\ge 0,~\sup _{x\in {\mathbb {R}}}\lambda ^\rho (x)g(x)<\infty \right\} . \end{aligned}$$

Theorem 3.3

Let X(t, x) be the mild solution of (3.2) with initial function \(f\in C_\rho ^+({\mathbb {R}})\). Assume that a(u) satisfies the condition (3.4); in particular, \(a(0)=0\). Then, we have

$$\begin{aligned} P(X(t,x)\ge 0;~t\ge 0,x\in {\mathbb {R}})=1. \end{aligned}$$

Proof

For every \(\varepsilon >0\), choose a non-negative, symmetric and smooth function \(\rho _\varepsilon (x)\) such that

$$\begin{aligned} \rho _\varepsilon (x)=0~\mathrm{for}~|x|\ge \varepsilon ~\mathrm{and}~\int _{{\mathbb {R}}}\rho _\varepsilon ^2(x)dx=1. \end{aligned}$$

Define

$$\begin{aligned} W_x^\varepsilon (t)=\int _0^t\int _{{\mathbb {R}}}\rho _\varepsilon (x-y)W(dy,ds). \end{aligned}$$
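As noted below, the normalisation \(\int _{\mathbb {R}}\rho _\varepsilon ^2(x)dx=1\) is exactly what makes each \(W_x^\varepsilon \) a standard Brownian motion, since \(E[W_x^\varepsilon (t)^2]=t\int \rho _\varepsilon ^2(y)dy=t\). The following sketch checks this variance by Monte Carlo; the particular bump profile is just one admissible choice of \(\rho _\varepsilon \), not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def rho_eps(y, eps):
    """A smooth symmetric bump supported in (-eps, eps), rescaled so that
    the integral of rho_eps^2 equals 1 (one admissible choice, not the only one)."""
    z = np.zeros_like(y)
    m = np.abs(y) < eps
    z[m] = np.exp(-1.0 / (1.0 - (y[m] / eps) ** 2))
    dy = y[1] - y[0]
    return z / np.sqrt(np.sum(z**2) * dy)

eps, L, nx = 0.5, 2.0, 400
y = np.linspace(-L, L, nx)
dy = y[1] - y[0]
rho = rho_eps(y, eps)
print("int rho_eps^2 dy =", np.sum(rho**2) * dy)            # = 1 by construction

# W_0^eps(t) = int_0^t int rho_eps(0 - y) W(dy, ds), simulated on a grid.
nt, dt, n_mc = 100, 0.01, 2000
W0 = np.zeros(n_mc)
for _ in range(nt):
    dW = rng.normal(0.0, np.sqrt(dy * dt), size=(n_mc, nx))  # white-noise increments
    W0 += dW @ rho                                           # spatial smoothing at x = 0
print("Var[W_0^eps(1)] ~", W0.var(), " (should be close to t = 1)")
```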

Notice that, for every \(x\in {\mathbb {R}}\), \(W_x^\varepsilon (t)\) is a one-dimensional Brownian motion. Set \(\Delta _\varepsilon =(G(\varepsilon )-I)/\varepsilon \) for \(\varepsilon >0\) where \(G(\varepsilon )f(x)=\left( G(\varepsilon ,\cdot )*f\right) (x).\) We consider the following equation

$$\begin{aligned} X_\varepsilon (t,x)=f(x)+\int _0^t\left( \Delta _\varepsilon X_\varepsilon (s,x)\right) ds+\int _0^ta\left( X_\varepsilon (s,x)\right) dW_x^\varepsilon (s). \end{aligned}$$

Since a(u) is Lipschitz continuous and \(\Delta _\varepsilon \) is a bounded operator on \(L^2({\mathbb {R}})\), the above equation has a unique strong solution with a continuous version (cf. [2]). We now claim that

$$\begin{aligned} P\left( X_\varepsilon (t,\cdot )\ge 0,~~{\forall }t\ge 0\right) =1. \end{aligned}$$
(3.8)

Let \(a_n=-2(n^2+n+2)^{-1}\), which is an increasing sequence of negative numbers. Immediately, we obtain that \(a_n\rightarrow 0\) as \(n\rightarrow \infty \) and \(\int _{a_{n-1}}^{a_n}x^{-2}dx=n\). Let \(\psi _n(x)\) be a nonnegative continuous function such that

$$\begin{aligned} \mathrm{supp}(\psi _n)=(a_{n-1},a_n),~~0\le \psi _n(x)\le \frac{2}{nx^2}~~~\mathrm{and}~~~\int _{a_{n-1}}^{a_n}\psi _n(x)dx=1. \end{aligned}$$

Define

$$\begin{aligned} \Phi _n(x)=\int _0^x\int _0^y\psi _n(z)dzdy. \end{aligned}$$

Then \(\Phi _n(x)\in C^2({\mathbb {R}})\) with \(\Phi ^{\prime \prime }_n(x)=\psi _n(x)\), \(-1\le \Phi _n^{\prime }(x)=\int _0^x\psi _n(z)dz\le 0\) for \(x<0\), and \(\Phi _n(x)=0\) for \(x\ge 0\). Note that, for each \(x<0\), there exists \(n_0\in {\mathbb {N}}\) such that \(a_n> x\) for all \(n\ge n_0\). As \(n\rightarrow \infty \), \(\Phi _n\) has the following properties:

$$\begin{aligned} \Phi _n(x)\rightarrow -\min \{x,0\}=:\phi (x),~~\Phi ^{\prime }_n(x)\rightarrow -1(x<0). \end{aligned}$$
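The functions \(\psi _n\) and \(\Phi _n\) can be written down explicitly. The sketch below uses a normalised quadratic bump on \((a_{n-1},a_n)\), which is one admissible choice among many, and checks numerically the bound \(\psi _n\le 2/(nx^2)\) and the convergence \(\Phi _n(x)\rightarrow -\min \{x,0\}\).

```python
import numpy as np

def a(n):
    """a_n = -2/(n^2+n+2): an increasing sequence of negative numbers with a_n -> 0."""
    return -2.0 / (n**2 + n + 2)

def psi(n, x):
    """One admissible psi_n: a quadratic bump on (a_{n-1}, a_n) normalised to integrate to 1."""
    lo, hi = a(n - 1), a(n)
    c = 6.0 / (hi - lo) ** 3                      # normalisation constant of the bump
    return np.where((x > lo) & (x < hi), c * (x - lo) * (hi - x), 0.0)

x = np.linspace(-1.2, 0.2, 200001)
dx = x[1] - x[0]
i0 = np.argmin(np.abs(x))                          # grid index closest to 0

for n in range(1, 7):
    p = psi(n, x)
    print(f"n={n}: int psi_n = {np.sum(p)*dx:.4f}, "
          f"max of n x^2 psi_n / 2 = {np.max(n * x**2 * p / 2):.3f} (<= 1 expected)")

def Phi(n):
    """Phi_n(x) = int_0^x int_0^y psi_n(z) dz dy, by cumulative sums anchored at 0."""
    inner = np.cumsum(psi(n, x)) * dx
    inner -= inner[i0]                             # int_0^y psi_n(z) dz
    outer = np.cumsum(inner) * dx
    return outer - outer[i0]                       # Phi_n(0) = 0

for n in (1, 5, 20):
    gap = np.max(np.abs(Phi(n) + np.minimum(x, 0.0)))
    print(f"n={n:2d}: sup_x |Phi_n(x) - phi(x)| = {gap:.4f}")
```

The printed gaps decrease with n, reflecting that the error is of order \(|a_{n-1}|\).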

By Itô's formula,

$$\begin{aligned} \Phi _n(X_\varepsilon (t,x))= & {} \Phi _n(f(x))+\int _0^t\Phi ^{\prime }_n(X_\varepsilon (s,x))\Delta _\varepsilon X_\varepsilon (s,x)ds\\&+\int _0^t\Phi _n^{\prime }(X_\varepsilon (s,x))a(X_\varepsilon (s,x))dW_x^\varepsilon (s)\\&+\frac{1}{2}\int _0^t\Phi ^{\prime \prime }(X_\varepsilon (s,x))a^2(X_\varepsilon (s,x))ds. \end{aligned}$$

From the Lipschitz condition and \(a(0)=0\),

$$\begin{aligned} \Phi ^{\prime \prime }(X_\varepsilon (s,x))a(X_\varepsilon (s,x))^2\le C\Phi ^{\prime \prime }(X_\varepsilon (s,x))X_\varepsilon ^2(s,x)\le C/n. \end{aligned}$$

Hence

$$\begin{aligned} E[\Phi _n(X_\varepsilon (t,x))]\le E\int _0^t\Phi ^{\prime }_n(X_\varepsilon (s,x))\Delta _\varepsilon X_\varepsilon (s,x)ds+\frac{C}{n}t. \end{aligned}$$

Taking the limit as \(n\rightarrow \infty \) and using the monotone convergence theorem, we obtain

$$\begin{aligned} E[\phi (X_\varepsilon (t,x))]\le & {} E\int _0^t-1(X_\varepsilon (s,x)<0)\Delta _\varepsilon X_\varepsilon (s,x)ds\\= & {} \frac{1}{\varepsilon }\int _0^tE\left[ 1(X_\varepsilon (s,x)<0)X_\varepsilon (s,x)\right] ds\\&-\frac{1}{\varepsilon }\int _0^t\int _{{\mathbb {R}}}G(\varepsilon ,x,y)E\left[ 1(X_\varepsilon (s,x)<0)X_\varepsilon (s,y)\right] dyds. \end{aligned}$$

For the second term, we notice that

$$\begin{aligned}&-\frac{1}{\varepsilon }\int _0^t\int _{{\mathbb {R}}}G(\varepsilon ,x,y)E\left[ 1(X_\varepsilon (s,x)<0)X_\varepsilon (s,y)\right] dyds\\&\quad \le -\frac{1}{\varepsilon }\int _0^t\int _{{\mathbb {R}}}G(\varepsilon ,x,y)E\left[ 1(X_\varepsilon (s,x)<0,X_\varepsilon (s,y)<0)X_\varepsilon (s,y)\right] dyds\\&\quad =\frac{1}{\varepsilon }\int _0^t\int _{{\mathbb {R}}}G(\varepsilon ,x,y)E\left[ 1(X_\varepsilon (s,x)<0,X_\varepsilon (s,y)<0)\left| X_\varepsilon (s,y)\right| \right] dyds\\&\quad \le \frac{1}{\varepsilon }\int _0^t\int _{{\mathbb {R}}}G(\varepsilon ,x,y)E\left[ 1(X_\varepsilon (s,y)<0)|X_\varepsilon (s,y)|\right] dyds. \end{aligned}$$

Then, since \(|x|1(x<0)=\phi (x)\), we get

$$\begin{aligned} E\left[ \phi (X_\varepsilon (t,x))\right] \le \frac{1}{\varepsilon }\int _0^t\int _{{\mathbb {R}}}G(\varepsilon ,x,y)E\left[ \phi (X_\varepsilon (s,y))\right] dyds. \end{aligned}$$

Therefore, applying Gronwall’s lemma to \(\sup _{x\in {\mathbb {R}}}E[\phi (X_\varepsilon (t,x))]\), we obtain

$$\begin{aligned} E[\phi (X_\varepsilon (t,x))]=0 \end{aligned}$$

for every \(t>0\) and \(x\in {\mathbb {R}}\), which yields (3.8). Let

$$\begin{aligned} G_\varepsilon (t)= & {} \exp t\Delta _\varepsilon =e^{-t/\varepsilon }\sum _{n=0}^\infty \frac{(t/\varepsilon )^n}{n!}G(n\varepsilon )=e^{-t/\varepsilon }I+R_\varepsilon (t)\\ R_\varepsilon (t,x,y)= & {} e^{-t/\varepsilon }\sum _{n=1}^\infty \frac{(t/\varepsilon )^n}{n!}G(n\varepsilon ,x,y). \end{aligned}$$

We will prove that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{0\le t\le T}\sup _{x\in {\mathbb {R}}}E|X_\varepsilon (t,x)-X(t,x)|^2=0. \end{aligned}$$

To prove this fact, we need the following lemma (cf. Appendix of [1]).

Lemma 3.2

\(\mathrm{(i)}\) :
$$\begin{aligned} \int _{\mathbb {R}}|R_\varepsilon (t,x,y)-G(t,x,y)|dy\le e^{-t/\varepsilon }+C(\varepsilon /t)^{1/2}~~~{\forall }t>0,\varepsilon >0. \end{aligned}$$
\(\mathrm{(ii)}\) :

There exists \(C>0\) such that

$$\begin{aligned} \int _{\mathbb {R}}R_\varepsilon (t,x,y)^2dy\le Ct^{-1/\alpha }~~~{\forall }t>0,~\varepsilon >0. \end{aligned}$$
\(\mathrm{(iii)}\) :
$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\int _0^t\int _{\mathbb {R}}(R_\varepsilon (s,x,y)-G(s,x,y))^2dyds=0~~~{\forall }t>0,~x\in {\mathbb {R}}. \end{aligned}$$
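The kernels \(R_\varepsilon \) and G can also be compared numerically: the sketch below builds \(R_\varepsilon \) from its Poisson-weighted series and evaluates the \(L^1\) distance in (i). The Fourier quadrature for G, the truncation of the series and the grids are ad hoc choices and are not taken from [1].

```python
import numpy as np

alpha = 1.5

def G(t, x):
    """G(t,x) by naive quadrature of the Fourier integral (ad hoc cutoff and grid)."""
    xi = np.linspace(-60.0, 60.0, 6001)
    dxi = xi[1] - xi[0]
    vals = np.exp(-t * np.abs(xi) ** alpha) * np.cos(np.outer(x, xi))
    return vals.sum(axis=1) * dxi / (2 * np.pi)

def R_eps(t, x, eps, n_max=80):
    """R_eps(t,x) = e^{-t/eps} sum_{n>=1} (t/eps)^n / n! G(n eps, x), truncated at n_max."""
    out = np.zeros_like(x)
    log_w = -t / eps                               # log of the Poisson weight, updated iteratively
    for n in range(1, n_max + 1):
        log_w += np.log(t / eps) - np.log(n)
        if log_w < -30.0:
            if n > t / eps:                        # past the mode of the weights: stop
                break
            continue                               # before the mode: weights still negligible
        out += np.exp(log_w) * G(n * eps, x)
    return out

x = np.linspace(-25.0, 25.0, 1001)
dx = x[1] - x[0]
t = 1.0
for eps in (0.5, 0.1):
    l1 = np.sum(np.abs(R_eps(t, x, eps) - G(t, x))) * dx
    ref = np.exp(-t / eps) + np.sqrt(eps / t)      # the shape of the bound in (i), with C = 1
    print(f"eps={eps}: L1 distance = {l1:.4f},  e^(-t/eps) + sqrt(eps/t) = {ref:.4f}")
```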

Notice that \(X_\varepsilon (t,x)\) can be written in the following mild form:

$$\begin{aligned}&X_\varepsilon (t,x)=\int _{{\mathbb {R}}}G_\varepsilon (t,x,y)f(y)dy+\int _0^te^{-\frac{(t-s)}{\varepsilon }}a(X_\varepsilon (s,x))dW_x^\varepsilon (s)\\&\quad +\int _0^t\int _{{\mathbb {R}}}R_\varepsilon (t-s,x,y)a(X_\varepsilon (s,y))dW_y^\varepsilon (s) \end{aligned}$$

where the last term is equal to

$$\begin{aligned} \int _0^t\int _{\mathbb {R}}\left( \int _{\mathbb {R}}R_\varepsilon (t-s,x,z)a(X_\varepsilon (s,z))\rho _\varepsilon (y-z)dz\right) W(dy,ds). \end{aligned}$$

Since f is bounded, it follows that for every \(T>0\) (see Remark 3.1),

$$\begin{aligned}&\sup _{0<\varepsilon \le 1}\sup _{0\le t\le T}\sup _{x\in {\mathbb {R}}}E\left[ |X_\varepsilon (t,x)|^2\right] <\infty \end{aligned}$$
(3.9)
$$\begin{aligned}&\sup _{0\le t\le T}\sup _{x\in {\mathbb {R}}}E\left[ |X(t,x)|^2\right] <\infty . \end{aligned}$$
(3.10)

Then we have

$$\begin{aligned}&E\left[ |X_\varepsilon (t,x)-X(t,x)|^2\right] \le C\Biggl \{|G_\varepsilon (t)f(x)-G(t)f(x)|^2\\&\qquad +E\left[ \int _0^te^{-2(t-s)/\varepsilon }|a(X_\varepsilon (s,x))|^2ds\right] \\&\qquad +\int _0^t\int _{\mathbb {R}}E\left[ \left| \int _{\mathbb {R}}R_\varepsilon (t-s,x,y)[a(X_\varepsilon (s,z))-a(X(s,z))]\rho _\varepsilon (y-z)dz\right| ^2\right] dyds\\&\qquad +\int _0^t\int _{\mathbb {R}}E\left[ \left| \int _{\mathbb {R}}R_\varepsilon (t-s,x,y)[a(X(s,z))-a(X(s,y))]\rho _\varepsilon (y-z)dz\right| ^2\right] dyds\\&\qquad +\int _0^t\int _{\mathbb {R}}E\left[ \left| \int _{\mathbb {R}}\left[ R_\varepsilon (t-s,x,z)-G(t-s,x,z)\right] a(X(s,y))\rho _\varepsilon (y-z)dz\right| ^2\right] dyds\\&\qquad +\int _0^t\int _{\mathbb {R}}E\left[ \left| \int _{\mathbb {R}}\left[ G(t-s,x,z)-G(t-s,x,y)\right] a(X(s,y))\rho _\varepsilon (y-z)dz\right| ^2\right] dyds\Biggl \}\\&\quad =:C\sum _{n=1}^6I_n(t,x,\varepsilon ). \end{aligned}$$

By Lemma 3.2 and the boundedness of f(x),

$$\begin{aligned} I_1(t,x,\varepsilon )= & {} C\left| \int _{\mathbb {R}}R_\varepsilon (t,x,y)f(y)dy-\int _{\mathbb {R}}G(t,x,y)f(y)dy\right| ^2\\\le & {} C\left( \int _{\mathbb {R}}|R_\varepsilon (t,x,y)-G(t,x,y)|dy\right) ^2\\\le & {} C\left( e^\frac{-2t}{\varepsilon }+(\varepsilon /t)^{\frac{2}{3}}\right) . \end{aligned}$$

As for \(I_2(t,x,\varepsilon )\)

$$\begin{aligned} I_2(t,x,\varepsilon )\le C\int _0^te^{-2(t-s)/\varepsilon }E[X_\varepsilon (s,x)^2]ds. \end{aligned}$$

This inequality and (3.9) imply that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{0\le t\le T}\sup _{x\in {\mathbb {R}}}I_2(t,x,\varepsilon )=0. \end{aligned}$$

By Hölder’s inequality, the Lipschitz continuity of a and Lemma 3.2, we have

$$\begin{aligned} I_3(t,x,\varepsilon )\le & {} C\int _0^t\int _{\mathbb {R}}E\left( \int _{\mathbb {R}}R_\varepsilon (t-s,x,z)|X_\varepsilon (s,z)-X(s,z)|\rho _\varepsilon (y-z)dz\right) ^2dyds\\\le & {} C\int _0^t\int _{\mathbb {R}}E\left( \int _{\mathbb {R}}R_\varepsilon (t-s,x,z)^2|X_\varepsilon (s,z)-X(s,z)|^2\rho _\varepsilon (y-z)dz\right) dyds\\\le & {} C\int _0^t(t-s)^{-\frac{1}{\alpha }}\sup _{y\in {\mathbb {R}}}E\left[ \left| X_\varepsilon (s,y)-X(s,y)\right| ^2\right] ds, \end{aligned}$$

and similarly

$$\begin{aligned} I_4(t,x,\varepsilon )\le C\int _0^t\int _{\mathbb {R}}\int _{\mathbb {R}}G(t-s,x,z)^2E\left[ \left| X(s,z)-X(s,y)\right| ^2\right] \rho _\varepsilon (y-z) dydzds. \end{aligned}$$

According to the inequality

$$\begin{aligned} E[|X(s,z)-X(s,y)|^2]\le C\left\{ s^{-1/\alpha }|y-z|+|y-z|^{\alpha -1}\right\} , \end{aligned}$$

the definition of \(\rho _\varepsilon \) gives that

$$\begin{aligned} I_4(t,x,\varepsilon )\le & {} C\int _0^t\int _{\mathbb {R}}\int _{\mathbb {R}}G(t-s,x,z)^2[s^{-1/\alpha }|y-z|+|y-z|^{\alpha -1}]\rho _\varepsilon (y-z) dydzds\\\le & {} C\int _0^t\int _{\mathbb {R}}G(t-s,x,y)^2\left[ s^{-1/\alpha }\varepsilon +\varepsilon ^{\alpha -1}\right] dyds\\\le & {} C\int _0^t(t-s)^{-1/\alpha }[s^{-1/\alpha }\varepsilon +\varepsilon ^{\alpha -1}]ds\le C(t^{1-2/\alpha }\varepsilon +\varepsilon ^{\alpha -1}). \end{aligned}$$

By Hölder’s inequality and (3.10),

$$\begin{aligned} I_5(t,x,\varepsilon )\le C\int _0^t\int _{\mathbb {R}}\int _{\mathbb {R}}[|R_\varepsilon (t-s,x,z)-G(t-s,x,z)|^2\rho _\varepsilon (y-z)dz]dyds, \end{aligned}$$

and applying Lemma 3.2 we obtain

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\sup _{0\le t\le T}\sup _{x\in {\mathbb {R}}}I_5(t,x,\varepsilon )=0. \end{aligned}$$

In the same way, we have

$$\begin{aligned} I_6(t,x,\varepsilon )\le & {} C\int _0^t\int _{\mathbb {R}}\int _{\mathbb {R}}[|G(t-s,x,z)-G(t-s,x,y)|^2\rho _\varepsilon (y-z)dz]dyds\\= & {} C\int _0^t\int _{\mathbb {R}}\Biggl [G(t-s,x,y)^2-\int _{\mathbb {R}}\{2G(t-s,x,y)G(t-s,x,z)\rho _\varepsilon (y-z)\\&+G(t-s,x,z)^2\rho _\varepsilon (y-z)\}dz\Biggl ]dyds. \end{aligned}$$

Applying the fact

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\int _{\mathbb {R}}G(t-s,x,z)\rho _\varepsilon (y-z)dz= G(t-s,x,y) ~~~\mathrm{uniformly~ for }~y\in {\mathbb {R}}, \end{aligned}$$

we can get

$$\begin{aligned} \lim _{\varepsilon \rightarrow \ 0}\sup _{0\le t\le T}\sup _{x\in {\mathbb {R}}}I_6(t,x,\varepsilon )=0. \end{aligned}$$

Now, we set

$$\begin{aligned} M(t,\varepsilon )=\sup _{x\in {\mathbb {R}}}\Vert X_\varepsilon (t,x)-X(t,x)\Vert _2^2. \end{aligned}$$

Then there exists some constant \(C>0\) such that

$$\begin{aligned} M(t,\varepsilon )\le C\int _0^t(t-s)^{-1/\alpha }M(s,\varepsilon )ds+H(T,\varepsilon )+{\hat{H}}(t,\varepsilon ), \end{aligned}$$

where

$$\begin{aligned}&H(T,\varepsilon )=C\sum _{n=2,5,6}\sup _{0\le t\le T}\sup _{x\in {\mathbb {R}}}I_n(t,x,\varepsilon ),\\&{\hat{H}}(t,\varepsilon )=C\left\{ (e^{-t/\varepsilon }+(\varepsilon /t)^{1/2})+(t^{1-2/\alpha }\varepsilon +\varepsilon ^{\alpha -1})\right\} . \end{aligned}$$

Therefore, Gronwall’s inequality implies that

$$\begin{aligned} M(t,\varepsilon )\rightarrow 0~~~\mathrm{as}~\varepsilon \rightarrow 0, \end{aligned}$$

which completes the proof of Theorem 3.3. \(\square \)

3.3 Polynomial decay

In this section, we show that the solution of (3.1) has a modification in the class \(C_\rho ({\mathbb {R}})\). The following lemma is a variant of Kolmogorov’s continuity criterion.

Lemma 3.3

  1. (i)

    Suppose that for every \(0<\rho <(\alpha +1)/2\) there exist \(p>0,\gamma >2\) and \(C_\rho >0\) such that

    $$\begin{aligned} E[|X(t,x)-X(t',x')|^p]\le C_\rho (|t-t'|^\gamma +|x-x'|^\gamma )\lambda ^{-\rho }(x), \end{aligned}$$

for \(0\le t,t'\le 1\) and \(x,x'\in {\mathbb {R}}\) with \(|x-x'|\le 1\). Then \(X(t,\cdot )\) has a \(C_{\rho }({\mathbb {R}})\)-valued continuous version P-a.s.

  2. (ii)

Let \(\{X_n(t,\cdot );~t\ge 0,n\in {\mathbb {N}}\}\) be a sequence of continuous \(C_\rho ({\mathbb {R}})\)-valued processes. Suppose that for every \(0<\rho <(\alpha +1)/2\) and \(T>0\) there exist \(p>0,\gamma >2\) and \(C_\rho >0\) such that

    $$\begin{aligned} E[|X_n(t,x)-X_n(t',x')|^p]\le C_\rho (|t-t'|^\gamma +|x-x'|^\gamma )\lambda ^{-\rho }(x), \end{aligned}$$

    for \(t,t'\in [0,T]\) and \(x,x'\in {\mathbb {R}}\) with \(|x-x'|\le 1\). Then the sequence of probability distributions on \(C([0,\infty );C_\rho ({\mathbb {R}}))\) induced by \(X_n(\cdot )\) is tight.

Theorem 3.4

Under the conditions of Lemma 3.1, \(X(t,\cdot )\) has a \(C_\rho ({\mathbb {R}})\)-valued continuous version P-a.s.

Proof

Since \(f\in C_\rho ({\mathbb {R}})\), it is enough to prove that for all \(0<\rho <(\alpha +1)/2\) there exist \(p>0\), \(\gamma >2\) and \(C_\rho >0\) such that

$$\begin{aligned} E[|X(t,x)-X(t',x')|^p]\le C(|t-t'|^\gamma +|x-x'|^\gamma )\lambda ^{-\rho }(x), \end{aligned}$$
(3.11)

where, with a slight abuse of notation, X(t, x) stands for the stochastic integral term \(\int _0^t\int _{\mathbb {R}}G(t-s,x,y)a(X(s,y))W(dy,ds)\), and \(0\le t,t'\le 1\), \(x,x'\in {\mathbb {R}}\) with \(|x-x'|\le 1\). We first show (3.11) with \(t=t'\). From Burkholder's inequality and Hölder's inequality, we have for every \(p=2^m,~m\in {\mathbb {N}}\),

$$\begin{aligned}&E[|X(t,x)-X(t,x')|^p]\\&\quad \le C_pE\left[ \int _0^t\int _{\mathbb {R}}(G(t-s,x,y)-G(t-s,x',y))^2a(X(s,y))^2dyds\right] ^{\frac{p}{2}}\\&\quad \le C_pE\left( \int _0^t\int _{\mathbb {R}}\left| G(t-s,x,y)-G(t-s,x',y)\right| ^2a(X(s,y))^p\lambda ^{\rho (\frac{p}{2}-1)}(y)dyds\right) \\&\qquad \times \left( \int _0^t\int _{\mathbb {R}}\left| G(t-s,x,y)-G(t-s,x',y)\right| ^2\lambda ^{-\rho }(y)dyds\right) ^{\frac{p-2}{2}}, \end{aligned}$$

where \(\rho < \frac{\alpha +1}{p-2}\). Lemma 3.1 implies that

$$\begin{aligned} E\left( \int _0^t\int _{\mathbb {R}}\left| G(t-s,x,y)-G(t-s,x',y)\right| ^2a(X(s,y))^p\lambda ^{\rho (\frac{p}{2}-1)}(y)dyds\right) <\infty . \end{aligned}$$

From Lemma 2.1

$$\begin{aligned} \left| G(t,x,y)-G(t,x',y)\right|\le & {} C|x-x'|G_x(t,x(\theta ),y)\\= & {} C|x-x'|t^{\frac{-2}{\alpha }}G_x(1,(x(\theta )-y)t^{\frac{-1}{\alpha }})\\\le & {} C|x-x'|t^{\frac{-2}{\alpha }}, \end{aligned}$$

where \(x(\theta )=\theta x+(1-\theta )x'\) for some \(0<\theta <1\). Therefore, for every \(\kappa <\alpha -1\)

$$\begin{aligned}&\int _0^t\int _{\mathbb {R}}\left| G(t-s,x,y)-G(t-s,x',y)\right| ^{2-\kappa +\kappa }\lambda ^{-\rho }(y)dyds \\&\quad \le C|x-x'|^\kappa \int _0^t(t-s)^{-\frac{2}{\alpha }\kappa }\\&\qquad \int _{\mathbb {R}}\left( |G(t-s,x,y)|^{2-\kappa }+|G(t-s,x',y)|^{2-\kappa }\right) \lambda ^{-\rho }(y)dyds. \end{aligned}$$

Note that,

$$\begin{aligned} \int _{\mathbb {R}}G(t,x,y)^{2-\kappa }\lambda ^{-\rho }(y)dy\le Ct^{-\frac{1-\kappa }{\alpha }}\lambda ^{-\rho }(x), \end{aligned}$$

so we have

$$\begin{aligned}&\int _0^t\int _{\mathbb {R}}\left| G(t-s,x,y)-G(t-s,x',y)\right| ^{2-\kappa +\kappa }\lambda ^{-\rho }(y)dyds\\&\quad \le C|x-x'|^\kappa \int _0^t(t-s)^{-\frac{1+\kappa }{\alpha }}ds\lambda ^{-\rho }(x). \end{aligned}$$

Choosing \(p=2^m\) such that \(\frac{p-2}{2}\kappa >2\), we obtain (3.11) with \(t=t'\). Next, we prove (3.11) with \(x=x'\). In the same way as above, for \(0\le t\le t'\le T\), we can show that

$$\begin{aligned}&E[|X(t,x)-X(t',x)|^p] \\&\quad \le C_p\Biggl \{E\left[ \int _0^t\int _{\mathbb {R}}(G(t'-s,x,y)-G(t-s,x,y))^2a(X(s,y))^2dyds\right] ^{\frac{p}{2}}\\&\qquad +E\left[ \int _t^{t'}\int _{\mathbb {R}}G(t'-s,x,y)^2a(X(s,y))^2dyds\right] ^{\frac{p}{2}}\Biggl \}\\&\quad \le C_p\Biggl \{E\Biggl (\int _t^{t'}\int _{\mathbb {R}}G(t'-s,x,y)^2a(X(s,y))^p\lambda ^{\rho (\frac{p}{2}-1)}(y)dyds\Biggl )\\&\qquad \times \left( \int _t^{t'}\int _{\mathbb {R}}\left| G(t'-s,x,y)\right| ^2\lambda ^{-\rho }(y)dyds\right) ^{\frac{p-2}{2}}\\&\qquad +E\Biggl (\int _0^t\int _{\mathbb {R}}\left| G(t'-s,x,y)-G(t-s,x,y)\right| ^2a(X(s,y))^p\lambda ^{\rho (\frac{p}{2}-1)}(y)dyds\Biggl )\\&\qquad \times \left( \int _0^t\int _{\mathbb {R}}\left| G(t'-s,x,y)-G(t-s,x,y)\right| ^2\lambda ^{-\rho }(y)dyds\right) ^{\frac{p-2}{2}}\Biggl \}, \end{aligned}$$

for \(\rho <\frac{\alpha +1}{p-2}\). From Lemma 2.1 we immediately get

$$\begin{aligned} \int _t^{t'}\int _{\mathbb {R}}\left| G(t'-s,x,y)\right| ^2\lambda ^{-\rho }(y)dyds\le & {} C\int _t^{t'}(t'-s)^{-1/\alpha }\int _{\mathbb {R}}G(t'-s,x,y)\lambda ^{-\rho }(y)dyds\\\le & {} C(t'-t)^{1-\frac{1}{\alpha }}\lambda ^{-\rho }(x). \end{aligned}$$

Hence, combining this with (3.5),

$$\begin{aligned}&C_pE\Biggl (\int _t^{t'}\int _{\mathbb {R}}G(t'-s,x,y)^2a(X(s,y))^p\lambda ^{-\rho }(y)dyds\Biggl )\\&\qquad \times \left( \int _t^{t'}\int _{\mathbb {R}}\left| G(t'-s,x,y)\right| ^2\lambda ^{-\rho }(y)dyds\right) ^{\frac{p-2}{2}}\\&\quad \le C_{p,T,\alpha }(t'-t)^{(1-\frac{1}{\alpha })\frac{p-2}{2}}\lambda ^{-\rho \frac{p-2}{2}}(x). \end{aligned}$$

We now claim that

$$\begin{aligned} \int _0^t\int _{\mathbb {R}}\left| G(t'-s,x,y)-G(t-s,x,y)\right| ^2\lambda ^{-\rho }(y)dyds\le C(t'-t)^{(1-\frac{1}{\alpha })}\lambda ^{-\rho }(x). \end{aligned}$$

By the change of variables \(s=\theta v\) with \(\theta =t'-t\), we have

$$\begin{aligned}&\int _0^t\int _{\mathbb {R}}\left| G(t'-t+s,x,y)-G(s,x,y)\right| ^2\lambda ^{-\rho }(y)dyds\\&\quad =\int _0^{t/\theta }\int _{\mathbb {R}}|G(\theta (v+1),x,y)-G(\theta v,x,y)|^2\lambda ^{-\rho }(y)\theta dydv\\&\quad \le \int _0^{t/\theta }\int _{\mathbb {R}}\theta ^{1-\frac{2}{\alpha }}|G(v+1,\theta ^{-\frac{1}{\alpha }}(x-y))\\&\qquad -G(v,\theta ^{-\frac{1}{\alpha }}(x-y))|^2\lambda ^{-\rho }(y)dydv. \end{aligned}$$

Note that,

$$\begin{aligned} \lambda (y)=(1+|y|^2)^{1/2}=\left( 1+\left| \theta ^{\frac{1}{\alpha }}\theta ^{-\frac{1}{\alpha }}y\right| ^2\right) ^{1/2}\le C\left( 1+\left| \theta ^{-\frac{1}{\alpha }}y\right| ^2\right) ^{1/2}. \end{aligned}$$

Again, by the change of variable \(z=\theta ^{-\frac{1}{\alpha }}(x-y)\), we have

$$\begin{aligned}&\int _0^t\int _{\mathbb {R}}\left| G(t'-s,x,y)-G(t-s,x,y)\right| ^2\lambda ^{-\rho }(y)dyds\\&\quad \le \theta ^{1-\frac{1}{\alpha }}\lambda ^{-\rho }(x)\int _0^{t/\theta }\int _{\mathbb {R}}|G(v+1,z)-G(v,z)|^2\lambda ^{\rho }(z)dzdv. \end{aligned}$$

Therefore, to prove (3.11) it remains to show that

$$\begin{aligned} \int _0^\infty \int _{\mathbb {R}}|G(v+1,z)-G(v,z)|^2\lambda ^{\rho }(z)dzdv<\infty . \end{aligned}$$

Let us write

$$\begin{aligned}&\int _0^\infty \int _{\mathbb {R}}|G(v+1,z)-G(v,z)|^2\lambda ^{\rho }(z)dzdv\\&\quad =\int _0^1\int _{\mathbb {R}}|G(v+1,z)-G(v,z)|^2\lambda ^{\rho }(z)dzdv\\&\qquad +\int _1^\infty \int _{\mathbb {R}}|G(v+1,z)-G(v,z)|^2\lambda ^{\rho }(z)dzdv. \end{aligned}$$

The first integral is finite, since

$$\begin{aligned}&\int _0^1\int _{\mathbb {R}}|G(v+1,z)-G(v,z)|^2\lambda ^{\rho }(z)dzdv\\&\quad \le C\int _0^1\int _{\mathbb {R}}v^{-\frac{1}{\alpha }}|G(v,z)|\lambda ^{\rho }(z)dzdv<\infty . \end{aligned}$$

For the second one, we use Lemma 2.1 and the change of variable \(z=(1+v)^{1/\alpha }z'\):

$$\begin{aligned}&\int _1^\infty \int _{\mathbb {R}}|G(v+1,z)-G(v,z)|^2\lambda ^{\rho }(z)dzdv\\&\quad =\int _1^\infty \int _{\mathbb {R}}|G(1,(1+v)^{-\frac{1}{\alpha }}z)\times (1+v)^{-\frac{1}{\alpha }}-G(1,v^{-\frac{1}{\alpha }}z)\times v^{-\frac{1}{\alpha }}|^2\lambda ^\rho (z)dzdv\\&\quad =\int _1^\infty \int _{\mathbb {R}} v^{-\frac{2}{\alpha }}\left| G(1,z')\times \left( \frac{v}{1+v}\right) ^{\frac{1}{\alpha }}-G\left( 1,\left( \frac{1+v}{v}\right) ^{\frac{1}{\alpha }}z'\right) \right| ^2(1+v)^{\frac{1}{\alpha }}\lambda ^\rho (z')dz'dv. \end{aligned}$$

Further,

$$\begin{aligned}&\left| G(1,z')\times \left( \frac{v}{1+v}\right) ^{\frac{1}{\alpha }}-G\left( 1,\left( \frac{1+v}{v}\right) ^{\frac{1}{\alpha }}z'\right) \right| ^2\\&\quad \le C\left[ \left| G(1,z')-G\left( 1,\left( \frac{1+v}{v}\right) ^{\frac{1}{\alpha }}z'\right) \right| ^2+\left| 1-\left( \frac{v}{1+v}\right) ^\frac{1}{\alpha }\right| ^2\left| G(1,z')\right| ^2\right] , \end{aligned}$$

and by Lemma 2.1

$$\begin{aligned}&\int _{-\infty }^\infty \left| G(1,z)-G\left( 1,\left( \frac{1+v}{v}\right) ^{\frac{1}{\alpha }}z\right) \right| ^2\lambda ^\rho (z)dz\\&\quad =\int _{-\infty }^{\infty }\left| \int _{\left( \frac{1+v}{v}\right) ^\frac{1}{\alpha }z}^zG_\xi (1,\xi )d\xi \right| ^2\lambda ^\rho (z)dz\\&\quad \le C\Biggl [\int _{-\infty }^{-1}\left| \int _{\left( \frac{1+v}{v}\right) ^\frac{1}{\alpha }z}^z\frac{1+|\xi |^\alpha }{\left( 1+|\xi |^{\alpha +1}\right) ^2}d\xi \right| ^2\lambda ^\rho (z)dz\\&\qquad +\int _{-1}^{0}\left| \int _{\left( \frac{1+v}{v}\right) ^\frac{1}{\alpha }z}^z\frac{1+|\xi |^\alpha }{\left( 1+|\xi |^{\alpha +1}\right) ^2}d\xi \right| ^2\lambda ^\rho (z)dz\\&\qquad +\int _{0}^{1}\left| \int ^{\left( \frac{1+v}{v}\right) ^\frac{1}{\alpha }z}_z\frac{1+|\xi |^\alpha }{\left( 1+|\xi |^{\alpha +1}\right) ^2}d\xi \right| ^2\lambda ^\rho (z)dz\\&\qquad +\int _{1}^{\infty }\left| \int ^{\left( \frac{1+v}{v}\right) ^\frac{1}{\alpha }z}_z\frac{1+|\xi |^\alpha }{\left( 1+|\xi |^{\alpha +1}\right) ^2}d\xi \right| ^2\lambda ^\rho (z)dz\Biggl ]. \end{aligned}$$

Notice that,

$$\begin{aligned} \int _{\left( \frac{1+v}{v}\right) ^\frac{1}{\alpha }z}^z\frac{1+|\xi |^\alpha }{\left( 1+|\xi |^{\alpha +1}\right) ^2}d\xi \le \frac{C}{\left( 1+|z|^{\alpha +1}\right) ^2}\int _{\left( \frac{1+v}{v}\right) ^\frac{1}{\alpha }z}^z|\xi |^\alpha d\xi \quad&\mathrm{on~}z\in (-\infty ,-1)\\ \int ^{\left( \frac{1+v}{v}\right) ^\frac{1 }{\alpha }z}_z\frac{1+|\xi |^\alpha }{\left( 1+|\xi |^{\alpha +1}\right) ^2}d\xi \le \frac{C}{\left( 1+|z|^{\alpha +1}\right) ^2}\int ^{\left( \frac{1+v}{v}\right) ^\frac{1}{\alpha }z}_z|\xi |^\alpha d\xi \quad&\mathrm{on~}z\in (1,\infty ). \end{aligned}$$

Thus we can estimate

$$\begin{aligned}&\int _{-\infty }^\infty \left| G(1,z)-G\left( 1,\left( \frac{1+v}{v}\right) ^{\frac{1}{\alpha }}z\right) \right| ^2\lambda ^\rho (z)dz\\&\quad \le C\left( 1+\int _1^\infty \left| \frac{z^{\alpha +1}}{(1+z^{\alpha +1})^2}\right| ^2\lambda ^\rho (z)dz\right) \left| 1-\left( \frac{v+1}{v}\right) ^\frac{\alpha +1}{\alpha }\right| ^2. \end{aligned}$$

Since \(0<\rho <(\alpha +1)/2\), we have

$$\begin{aligned}&\int _{\mathbb {R}}\left| G(1,z)\times \left( \frac{v}{1+v}\right) ^{\frac{1}{\alpha }}-G\left( 1,\left( \frac{1+v}{v}\right) ^{\frac{1}{\alpha }}z\right) \right| ^2\lambda ^\rho (z)dz\\&\quad \le C\left( \left| 1-\left( \frac{v+1}{v}\right) ^\frac{\alpha +1}{\alpha }\right| ^2+\left| 1-\left( \frac{v+1}{v}\right) ^\frac{1}{\alpha }\right| ^2\right) . \end{aligned}$$

By the mean value theorem, we can get

$$\begin{aligned} \left| 1-\left( \frac{v+1}{v}\right) ^\frac{\alpha +1}{\alpha }\right| ^2+\left| 1-\left( \frac{v+1}{v}\right) ^\frac{1}{\alpha }\right| ^2\le Cv^{-2}. \end{aligned}$$

Hence

$$\begin{aligned}&\int _1^\infty \int _{\mathbb {R}}|G(v+1,z)-G(v,z)|^2\lambda ^{\rho }(z)dzdv\\&\quad \le C\int _1^\infty v^{\frac{-2(\alpha +1)}{\alpha }}(1+v)^{1/\alpha }dv. \end{aligned}$$

Since \(-2(\alpha +1)/\alpha +1/\alpha <-1\), the last integral is finite, which completes the proof of Theorem 3.4. \(\square \)

4 Existence in non-Lipschitz case

In this section, we consider SPDE (1.1). We prove the existence of solutions by using the results in the previous section.

Theorem 4.1

Let \(f\in C_\rho ^+({\mathbb {R}})\) be an initial function. Then for every \(T>0\), there exist an \(\{{\mathcal {F}}_t\}\)-space-time Gaussian white noise \({\dot{W}}(t,x)\) and a \(C([0,T];C_\rho ^+({\mathbb {R}}))\)-valued solution X of (2.3) with \(X(0)=f\) on a suitable probability space with filtration \((\Omega ,{\mathcal {F}},P,\{{\mathcal {F}}_t\}).\)

Proof

Let \(a_n(u)\) be a sequence of Lipschitz functions such that

$$\begin{aligned} a_n(u)= \left\{ \begin{aligned} n^{1-\gamma }|u| \quad&\mathrm{if~}|u|<\frac{1}{n},\\ |u|^\gamma ~~~~~&\mathrm{if}~|u|\ge \frac{1}{n}. \end{aligned}\right. \end{aligned}$$
(4.1)

The sequence \(a_n(u)\) converges to \(|u|^\gamma \) uniformly in \(u\in {\mathbb {R}}\) as \(n\rightarrow \infty \). By Theorems 3.1, 3.3 and 3.4, for every \(0<\rho <(\alpha +1)/2\) there exists a unique \(C_\rho ^+({\mathbb {R}})\)-valued solution \(X_n\) to (3.1) with \(a=a_n\) for each \(n\ge 1\).
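The uniform convergence can also be seen directly: on \(|u|<1/n\) the difference \(|u|^\gamma -n^{1-\gamma }|u|\) is nonnegative and at most \((1/n)^\gamma \), and it vanishes elsewhere. The following quick numerical check is purely illustrative; the value \(\gamma =0.75\) is an arbitrary test choice.

```python
import numpy as np

gamma = 0.75                      # any exponent in (1/2, 1); 0.75 is an arbitrary test value

def a_n(u, n):
    """The Lipschitz approximation (4.1) of |u|^gamma."""
    return np.where(np.abs(u) < 1.0 / n,
                    n ** (1.0 - gamma) * np.abs(u),
                    np.abs(u) ** gamma)

u = np.linspace(-2.0, 2.0, 2000001)
for n in (1, 10, 100, 1000):
    err = np.max(np.abs(a_n(u, n) - np.abs(u) ** gamma))
    print(f"n={n:5d}: sup_u |a_n(u) - |u|^gamma| = {err:.3e}   (bound n^(-gamma) = {n ** -gamma:.3e})")
```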

The solution \(X_n\) satisfies the moment condition (3.11), and it follows from Lemma 3.3 that the family of probability distributions on \(C\left( [0,T];C_\rho ^+({\mathbb {R}})\right) \) induced by \(\left\{ X_n\right\} \) is tight. This means that there exist a subsequence \(\left\{ n_k\right\} _{k\in {\mathbb {N}}}\) and a random field \(X\in C\left( [0,T];C_\rho ^+({\mathbb {R}})\right) \) such that

$$\begin{aligned} X_{n_k} \Rightarrow X \quad \text {on} \ C \left( [0,T]; C_\rho ^{+}({\mathbb {R}}) \right) . \end{aligned}$$

By the Skorohod representation theorem (cf. [5]), we can find random fields \(Y_n,Y\in C\left( [0,T];C_\rho ^+({\mathbb {R}})\right) \) on some probability space \(({\bar{\Omega }},\bar{{\mathcal {F}}},(\bar{{\mathcal {F}}}_t)_{0\le t\le T},{\bar{P}})\) such that

$$\begin{aligned} Y_n\rightarrow Y~~~{\bar{P}}-a.s.~\mathrm{on}~C\left( [0,T];C_\rho ^+({\mathbb {R}})\right) , \end{aligned}$$

and

$$\begin{aligned} X_n=Y_n~~~\mathrm{in~law},~~~~~~X=Y~~~\mathrm{in~law}. \end{aligned}$$

Then we can get for every \(\phi \in {\mathcal {D}}((-\Delta )^{\alpha /2})\)

$$\begin{aligned} M_\phi ^n(t):= & {} \int _0^t\int _{\mathbb {R}}\phi (y)a_n(X_n(s,y))W(dy,ds)\\= & {} \int _{\mathbb {R}}X_n(t,y)\phi (y)dy+\int _0^t\int _{\mathbb {R}}(-\Delta )^{\alpha /2}\phi (y)X_n(s,y)dyds\\&\quad -\int _{\mathbb {R}}\phi (y)f(y)dy\\&\overset{\mathrm {in~law}}{=}\int _{\mathbb {R}}Y_n(t,y)\phi (y)dy+\int _0^t\int _{\mathbb {R}}(-\Delta )^{\alpha /2}\phi (y)Y_n(s,y)dyds\\&\quad -\int _{\mathbb {R}}\phi (y)f(y)dy\\\rightarrow & {} \int _{\mathbb {R}}X(t,y)\phi (y)dy+\int _0^t\int _{\mathbb {R}}(-\Delta )^{\alpha /2}\phi (y)X(s,y)dyds\\&\qquad -\int _{\mathbb {R}}\phi (y)f(y)dy. \end{aligned}$$

Note that, by (3.7)

$$\begin{aligned} \sup _{0\le t\le T}\sup _{n\in {\mathbb {N}}}E[|M_\phi ^n(t)|^2]<\infty . \end{aligned}$$

Hence, \((M_\phi ^n)_{n\in {\mathbb {N}}}\) is a sequence of uniformly integrable martingales, and therefore, there exists a martingale \(M_\phi \) such that

$$\begin{aligned} M_\phi ^n(\cdot )\Rightarrow M_\phi (\cdot ), \end{aligned}$$

and

$$\begin{aligned} M_\phi (t)=\int _{\mathbb {R}}X(t,y)\phi (y)dy+\int _0^t\int _{\mathbb {R}}(-\Delta )^{\alpha /2}\phi (y)X(s,y)dyds-\int _{\mathbb {R}}\phi (y)f(y)dy. \end{aligned}$$

It follows from (4.1) that, as \(n\rightarrow \infty \),

$$\begin{aligned} \langle M_\phi ^n,M_\phi ^n \rangle _t\rightarrow \int _0^t\int _{\mathbb {R}}\phi ^2(y)|X(s,y)|^{2\gamma }dyds, \end{aligned}$$

and hence

$$\begin{aligned} \langle M_\phi ,M_\phi \rangle _t=\int _0^t\int _{\mathbb {R}}\phi ^2(y)|X(s,y)|^{2\gamma }dyds. \end{aligned}$$

This means that there is a corresponding martingale measure M(dy, dt) with quadratic variation measure

$$\begin{aligned} Q(dt,dy)=|X(t,y)|^{2\gamma }dydt. \end{aligned}$$

Take an \({\mathcal {S}}'({\mathbb {R}})\)-valued standard Wiener process \({\bar{W}}_t\) independent of X. We set

$$\begin{aligned} W_t(\phi )&=\int _0^t\int _{\mathbb {R}}\frac{1}{|X(s,y)|^{\gamma }}1_{\{X(s,y)\ne 0\}}\phi (y)M(dy,ds)\\&\quad +\int _0^t\int _{\mathbb {R}}1_{\{X(s,y)=0\}}\phi (y){\bar{W}}(dy,ds). \end{aligned}$$

From the definition of M and W we can show that

$$\begin{aligned}&\int _0^t\int _{\mathbb {R}}\phi (y)|X(s,y)|^\gamma W(dy,ds)=\int _0^t\int _{\mathbb {R}}\phi (y)1_{\{X(s,y)\ne 0\}}M(dy,ds)\\&\quad =\int _0^t\int _{\mathbb {R}}\phi (y)M(dy,ds)-\int _0^t\int _{\mathbb {R}}\phi (y)1_{\{X(s,y)=0\}}M(dy,ds). \end{aligned}$$

Since the last term vanishes a.s., we have

$$\begin{aligned} M_\phi (t)=\int _0^t\int _{\mathbb {R}}|X(s,y)|^\gamma \phi (y)W(dy,ds). \end{aligned}$$

Thus we complete the proof of Theorem 4.1. \(\square \)