1 Introduction

Our aim is to study the longtime dynamics of stochastic evolution equations using an approach that is different from the classical one. Namely, instead of transforming the SPDE into a random PDE, we work with solutions that are defined pathwise, see [5, 26]. We consider parabolic problems with random differential operators and use a pathwise representation formula to show that the solution operator generates a random dynamical system and to prove that it possesses random attractors of finite fractal dimension.

In particular, let X be a separable Banach space and let \((\Omega , \mathcal {F}, \mathbb {P})\) be a complete probability space with filtration \((\mathcal {F}_t)_{t\in \mathbb {R}}\). We consider stochastic parabolic evolution equations of the form

$$\begin{aligned} {\text {d}}u(t)&=(A(t,\omega )u(t)+F(u(t))){\text {d}}t+\sigma {\text {d}}W(t), \end{aligned}$$
(1.1)

where \((A(t,\omega ))_{t\in \mathbb {R},\omega \in \Omega }\) is a measurable, adapted family of sectorial operators in X depending on time and the underlying probability space. Moreover, F is the nonlinearity, \(\sigma >0\) indicates the noise intensity, and \((W(t))_{t\ge 0}\) denotes an X-valued Brownian motion.

The common approach to show the existence of random attractors is to introduce a suitable change of variables that transforms the SPDE into a family of PDEs with random coefficients. The resulting random PDEs can be studied by deterministic techniques. This method has been applied to a large variety of PDEs, mainly for equations perturbed by additive noise or by a particular linear multiplicative noise, e.g., see [4, 16, 17, 28, 31] and the references therein. However, for more general situations such a change of variables is not always known or cannot be performed. In [15], using the theory of strongly monotone operators, a strictly stationary solution of the equation

$$\begin{aligned} {\text {d}}u(t) = A(t,\omega , u(t))~{\text {d}}t +\sigma {\text {d}}W(t) \end{aligned}$$

was constructed. This makes it possible to transform SPDEs of the form (1.1) into a family of random PDEs. Using this ansatz, the existence of random attractors was shown in [15] for a class of SPDEs including equations such as (1.1).

Here, we follow a different approach. We aim to use the notion of pathwise mild solutions introduced by Pronk and Veraar in [26] to establish the existence of global and exponential random attractors for (1.1). So far, only a few results concerning the existence of exponential attractors for SPDEs have been obtained. To be precise, let \((U(t,s,\omega ))_{t\ge s,\omega \in \Omega }\) be the stochastic evolution system generated by the family \((A(t,\omega ))_{t\in \mathbb {R},\omega \in \Omega }\); then the pathwise mild solution of (1.1) is defined as

$$\begin{aligned} u(t)&=\ U (t,0) u_{0} + \sigma U(t,0) W(t) + \int \limits _{0}^{t} U(t,s)F(u(s))~{\text {d}}s\\&\quad -\,\sigma \int \limits _{0}^{t}U(t,s)A(s) (W(t) -W(s)) ~{\text {d}}s, \end{aligned}$$

where, for simplicity, we omit the dependence of A and U on \(\omega \). This formula is motivated by formally applying integration by parts to the stochastic integral, and it indeed yields a pathwise representation of the solution, see [26]. If one instead directly used the classical mild formulation of SPDEs to define a solution, the resulting stochastic integral would not be well defined (see Sect. 2.3).

Our aim is to show that problem (1.1) generates a random dynamical system using the concept of pathwise mild solutions and to prove the existence of random attractors. We will not only consider (global) random attractors, but also show that random exponential attractors exist. In particular, the existence of random exponential attractors immediately implies the existence and finite fractal dimension of the (global) random attractor. To this end, we employ a general existence result for random exponential attractors from [7], which turns out to be easily applicable in our setting.

Stochasticity plays an important role in many real-world applications. Complex systems in physics, engineering or biology can be described by PDEs with coefficients that depend on stochastic processes. These random terms quantify the lack of knowledge of certain parameters in the equation or reflect external fluctuations. Problem (1.1) is a semilinear parabolic problem where the coefficients of the differential operators \((A(t,\omega ))_{t\in \mathbb {R},\omega \in \Omega }\) depend on a stochastic process with suitable properties, and the equation is perturbed by additive noise. A related but simpler setting is given by random parabolic equations of the form

$$\begin{aligned} {\text {d}}u(t) =( A(t,\omega ) u(t)+F(t,\omega ,u(t))){\text {d}}t. \end{aligned}$$
(1.2)

The longtime behavior of such random evolution equations has been investigated using the random dynamical system approach in [6, 20, 21, 27]. To this end, the following structure of the random generators was assumed,

$$\begin{aligned} A(t,\omega ):=A(\theta _t\omega )\qquad \forall t\in \mathbb {R}, \omega \in \Omega , \end{aligned}$$

where \((\Omega ,\mathcal {F}, \mathbb {P}, (\theta _t)_{t\in \mathbb {R}} )\) is an ergodic metric dynamical system. In this context, results concerning invariant manifolds [6, 20], principal Lyapunov exponents [21] and the stability of equilibria [27] have been obtained. Random evolution equations of the form (1.2) arise in several applications. For instance, setting \(A(\theta _t\omega )u:=\Delta u + a(\theta _t\omega ) u\) and \(F(t,\omega ,u):=-a(\theta _t\omega )u^2\), with a suitable measurable function \(a:\Omega \rightarrow (0,\infty )\), we recover a random version of the Fisher–KPP equation,

$$\begin{aligned} {\text {d}}u(t) = [(\Delta + a(\theta _t\omega )) u(t) - a(\theta _t\omega )u^2(t) ]{\text {d}}t, \end{aligned}$$

which was analyzed in [27].

In the present work, we consider equations of the form (1.1), i.e., we perturb a semilinear nonautonomous random parabolic equation by an infinite-dimensional Brownian motion and investigate the existence of random attractors.

The outline of our paper is as follows. In Sect. 2, we collect basic notions and results from the theory of random dynamical systems and nonautonomous stochastic evolution equations and recall an existence result for random exponential attractors. In Sect. 3, we formulate and prove our main results. First, we show that under suitable assumptions on A, F and W, the solution operator corresponding to (1.1) generates a random dynamical system. Then, we establish the existence of an absorbing set and verify the so-called smoothing property of the random dynamical system. These properties allow us to conclude the existence of random exponential attractors in Theorem 3.8 and to derive upper bounds for their fractal dimension. As a consequence, the (global) random attractor exists and its fractal dimension is finite. In Sect. 4, we provide explicit examples of nonautonomous random differential operators satisfying our hypotheses and point out potential applications of our results.

Our paper provides a first, simple example that illustrates how the concept of pathwise mild solutions can be used to show the existence of global and exponential random attractors for SPDEs with random differential operators. Numerous extensions are imaginable. In particular, in future works we plan to relax the assumptions on the nonlinear term F and to consider Problem (1.1) with multiplicative noise. Another interesting aspect would be to investigate the regularity of random attractors.

2 Preliminaries

We first collect some basic notions and results from the theory of random dynamical systems, which are mainly taken from [3, 10, 28]. Then, in Sect. 2.2, we state a general existence theorem for random exponential attractors which was proven in [7]. In Sect. 2.3, we recall the notion of pathwise mild solutions for stochastic evolution equations introduced in [26].

2.1 Random dynamical systems and random attractors

In order to quantify uncertainty, we describe an appropriate model of the noise. If not further specified, \((\Omega ,\mathcal {F},\mathbb {P})\) denotes a probability space. Moreover, X is a separable and reflexive Banach space and \(\Vert \cdot \Vert _X\) denotes the norm in X.

Definition 2.1

Let \(\theta :\mathbb {R}\times \Omega \rightarrow \Omega \) be a family of \(\mathbb {P}\)-preserving transformations (meaning that \(\theta _{t}\mathbb {P}=\mathbb {P}\) for all \(t\in \mathbb {R}\)) with the following properties:

(i):

the mapping \((t,\omega )\mapsto \theta _{t}\omega \) is \((\mathcal {B}(\mathbb {R})\otimes \mathcal {F},\mathcal {F})\)-measurable;

(ii):

\(\theta _{0}=\text {Id}_{\Omega }\);

(iii):

\(\theta _{t+s}=\theta _{t}\circ \theta _{s}\) for all \(t,s\in \mathbb {R}\),

where \(\mathcal {B}(\mathbb {R})\) denotes the Borel \(\sigma \)-algebra. Then, the quadruple \((\Omega ,\mathcal {F},\mathbb {P},(\theta _{t})_{t\in \mathbb {R}})\) is called a metric dynamical system.

Remark 2.2

(a):

    Here and in the sequel, we write \(\theta _{t}\omega \) for \(\theta (t,\omega ),\ t\in \mathbb {R},\omega \in \Omega \).

(b):

    We always assume that \(\mathbb {P}\) is ergodic with respect to \((\theta _{t})_{t\in \mathbb {R}}\), i.e., any \((\theta _{t})_{t\in \mathbb {R}}\)-invariant subset has either zero or full measure.

Our aim is to introduce a metric dynamical system associated with a two-sided X-valued Wiener process.

For the sake of completeness, we recall the construction of such a process if X is not a Hilbert space. First, we introduce an auxiliary separable Hilbert space H and denote by \((W_H(t))_{t\ge 0}\) an H-cylindrical Brownian motion, i.e., \((W_{H}(t)h)_{t\ge 0}\) is a real-valued Brownian motion for every \(h\in H\) and \(\mathbb {E}[W_{H}(t)h\cdot W_H(s)g]=\min \{s,t\}[h,g]_{H}\) for \(s,t\ge 0\) and \(h,g\in H\), where \([\cdot ,\cdot ]_{H}\) denotes the inner product in H. Furthermore, an operator \(G:H\rightarrow X\) is called \(\gamma \)-radonifying if

$$\begin{aligned} \mathbb {E}\left\| \sum \limits _{n=1}^{\infty }\gamma _{n}G e_{n} \right\| _X^{2}<\infty , \end{aligned}$$

where \((\gamma _{n})_{n\in \mathbb {N}}\) is a sequence of independent standard Gaussian random variables and \((e_{n})_{n\in \mathbb {N}}\) is an orthonormal basis in H. If X is isomorphic to H, then the previous condition means that G is a Hilbert–Schmidt operator (notation: \(G\in \mathcal {L}_{2}(H)\)). In this framework, letting \((\tilde{e}_n)_{n\in \mathbb {N}}\) be an orthonormal basis of \((\text{ ker }G)^{\perp }\), we know according to [30, Prop. 8.8] that the series

$$\begin{aligned} \sum \limits _{n=1}^{\infty }W_{H}(t){\tilde{e}}_n\,G{\tilde{e}}_n \end{aligned}$$

converges almost surely and defines an X-valued Brownian motion. Its covariance operator is given by \(tGG^{*}\), where \(G^*\) denotes the adjoint of G. Moreover, every X-valued Brownian motion can be obtained in this way. Again, if X is isomorphic to H and \(G\in \mathcal {L}_{2}(H)\) with \(\Vert G\Vert _{\mathcal {L}_{2}(H)}^{2}=\text{ Tr }(G G^{*})<\infty \), then the previous definition yields a trace-class Wiener process. Finally, we extend this to a two-sided process in the standard way.
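To make the series construction concrete, the following minimal Python sketch (our own numerical illustration; the truncation level N, the identification \(H\cong X\cong \mathbb {R}^N\) and the diagonal operator G are assumptions made purely for the demonstration) simulates a truncated version of the series above.

```python
import numpy as np

# Minimal numerical sketch (illustrative assumptions): we truncate the series
# sum_n W_H(t) e_n * G e_n at N terms, identify H and X with R^N via the
# standard basis, and choose a hypothetical diagonal G with G e_n = e_n / n,
# which is Hilbert-Schmidt since sum_n n^{-2} < infinity.
def q_wiener_path(T=1.0, n_steps=1000, N=50, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    g = 1.0 / np.arange(1, N + 1)        # diagonal entries of G
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_steps, N))
    W_H = np.cumsum(dW, axis=0)          # N independent scalar Brownian motions
    return W_H * g                       # coordinates of the X-valued path

W = q_wiener_path()
# sanity check: the n-th coordinate of W(T) is Gaussian with variance T / n^2,
# i.e., the covariance operator of W(T) is T * G G^* (diagonal here)
print(W[-1, :3])
```

By construction, the n-th coordinate of W(t) is a centered Gaussian with variance \(t g_n^2\), so the truncated covariance operator is \(tGG^{*}\), in line with the discussion above.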

To obtain a metric dynamical system associated with such a process, we let \(C_{0}(\mathbb {R};X)\) denote the set of continuous X-valued functions which vanish at zero, equipped with the compact-open topology. We take \(\mathbb {P}\) to be the Wiener measure on \(\mathcal {B}(C_{0}(\mathbb {R};X))\) with covariance operator Q on X; Kolmogorov's theorem about the existence of a continuous version then yields the canonical probability space \((C_{0}(\mathbb {R};X),\mathcal {B}(C_{0}(\mathbb {R};X)),\mathbb {P})\). Moreover, to obtain an ergodic metric dynamical system, we introduce the Wiener shift, which is defined as

$$\begin{aligned} \theta _{t}\omega (\cdot {}):=\omega (t+\cdot {})-\omega (t)\quad \text{ for } \text{ all } t\in \mathbb {R}, \omega \in C_{0}(\mathbb {R};X). \end{aligned}$$
(2.1)

Throughout this manuscript, \(\theta _t\omega (\cdot )\) will always denote the Wiener shift.
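On sampled paths, the Wiener shift (2.1) amounts to re-indexing and re-centering. The following short sketch (an illustration in the same spirit as the sketch above; the discretization, horizon and seed are arbitrary choices) implements it for a path stored on a uniform time grid.

```python
import numpy as np

# Illustrative sketch: the Wiener shift (2.1) on a path sampled on the uniform
# grid t_k = k * dt. For a path stored on [0, T], the shifted path is available
# wherever t + s stays in [0, T].
def wiener_shift(omega, k):
    """Return theta_t omega(.) = omega(t + .) - omega(t) for t = k * dt."""
    return omega[k:] - omega[k]

dt, n = 0.001, 10_000
rng = np.random.default_rng(1)
omega = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
shifted = wiener_shift(omega, 2500)
print(shifted[0] == 0.0)  # theta_t omega(0) = 0: shifted paths stay in C_0
```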

We now recall the definition of a random dynamical system.

Definition 2.3

A continuous random dynamical system on X over a metric dynamical system \((\Omega ,\mathcal {F},\mathbb {P},(\theta _{t})_{t\in \mathbb {R}})\) is a mapping

$$\begin{aligned} \varphi :\mathbb {R}^{+}\times \Omega \times X\rightarrow X, (t,\omega ,x)\mapsto \varphi (t,\omega ,x), \end{aligned}$$

which is \((\mathcal {B}(\mathbb {R}^{+})\otimes \mathcal {F}\otimes \mathcal {B}(X),\mathcal {B}(X))\)-measurable and satisfies:

(i):

\(\varphi (0,\omega ,\cdot {})=\text {Id}_{X}\) for all \(\omega \in \Omega \);

(ii):

\( \varphi (t+\tau ,\omega ,x)=\varphi (t,\theta _{\tau }\omega ,\varphi (\tau ,\omega ,x)), \text{ for } \text{ all } x\in X, t,\tau \in \mathbb {R}^{+} \text{ and } \text{ all } \omega \in \Omega ;\)

(iii):

\(\varphi (t,\omega ,\cdot {}):X\rightarrow X\) is continuous for all \(t\in \mathbb {R}^{+}\) and \(\omega \in \Omega \).

The second property is referred to as the cocycle property and generalizes the semigroup property. In fact, if \(\varphi \) is independent of \(\omega \), (ii) reduces exactly to the semigroup property, i.e., \(\varphi (t+\tau ,x)=\varphi (t,\varphi (\tau ,x))\). For random dynamical systems, the evolution of the noise \((\theta _{t}\omega )_{t\in \mathbb {R}}\) additionally has to be taken into account.

Under suitable assumptions, the solution operator of a random differential equation generates a random dynamical system. Stochastic (partial) differential equations are more involved, since stochastic integrals are only defined almost surely, whereas the cocycle property must hold for all \(\omega \).

As shown in the monograph by Arnold [3], stochastic (ordinary) differential equations generate random dynamical systems under suitable assumptions on the coefficients. This is due to the flow property, see [19], which can be deduced from Kolmogorov's theorem about the existence of a (Hölder-)continuous random field with a finite-dimensional parameter range. Here, the parameters of this random field are the time and the nonrandom initial data.

Whether an SPDE generates a random dynamical system has been a long-standing open problem, since Kolmogorov's theorem breaks down for random fields parameterized by infinite-dimensional Hilbert spaces, see [23]. As a consequence, the question of how a random dynamical system can be obtained from an SPDE is not trivial, since solutions are only defined almost surely, which is insufficient for the cocycle property. In particular, there exist exceptional sets which depend on the initial condition, and if more than countably many exceptional sets occur, it is unclear how the random dynamical system can be defined. This problem has been fully solved only under restrictive assumptions on the structure of the noise. More precisely, for SPDEs with additive or linear multiplicative noise, there are standard transformations which convert these SPDEs into PDEs with random coefficients. Since random PDEs can be solved pathwise, the generation of the random dynamical system is then straightforward.

Before we recall the notions of global and exponential random attractors, we need to introduce the class of tempered random sets. From now on, when stating properties involving a random parameter, we assume, unless otherwise specified, that they hold on a \((\theta _t)_{t\in \mathbb {R}}\)-invariant subset of \(\Omega \) of full measure, i.e., there exists a \((\theta _t)_{t\in \mathbb {R}}\)-invariant subset \(\Omega _0\subset \Omega \) of full measure such that the property holds for all \(\omega \in \Omega _0.\) To simplify notation, we denote \(\Omega _0\) again by \(\Omega .\)

Definition 2.4

A multifunction \(\mathcal {B}=\{B(\omega )\}_{\omega \in \Omega }\) of nonempty closed subsets \(B(\omega )\) of X is called a random set if

$$\begin{aligned} \omega \mapsto \inf _{y\in B(\omega )}\Vert x-y\Vert _X \end{aligned}$$

is a random variable for each \(x\in X\).

The random set \(\mathcal {B}\) is bounded (or compact) if the sets \(B(\omega )\subset X\) are bounded (or compact) for all \(\omega \in \Omega .\)

Definition 2.5

A random bounded set \(\{B(\omega )\}_{\omega \in \Omega }\) of X is called tempered with respect to \((\theta _{t})_{t\in \mathbb {R}}\) if for all \(\omega \in \Omega \) it holds that

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }\text {e}^{-\beta t}\sup \limits _{x\in B(\theta _{-t}\omega )}\Vert x\Vert _X=0\quad \text{ for } \text{ all } \beta >0. \end{aligned}$$

Here and in the sequel, we denote by \(\mathcal {D}\) the collection of tempered random sets in X.
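To illustrate the definition (a standard example, added here for orientation): every deterministic bounded set \(B(\omega )\equiv B\) is tempered, since \(\sup _{x\in B}\Vert x\Vert _X\) does not depend on t. More generally, if \(B(\omega )=\{x\in X:\Vert x\Vert _X\le r(\omega )\}\) is a random ball whose radius grows subexponentially along the flow, then for every \(\beta >0\),

$$\begin{aligned} \text {e}^{-\beta t}\sup \limits _{x\in B(\theta _{-t}\omega )}\Vert x\Vert _X=\text {e}^{-\beta t}r(\theta _{-t}\omega )\rightarrow 0\quad \text{ as } t\rightarrow \infty , \end{aligned}$$

whereas a radius growing like \(\text {e}^{\gamma t}\) for some \(\gamma >0\) violates the condition for \(\beta <\gamma \).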

Definition 2.6

Let \(\varphi \) be a random dynamical system on X. A random set \(\{\mathcal {A}(\omega )\}_{\omega \in \Omega }\in \mathcal {D}\) is called a \(\mathcal {D}\)-random (pullback) attractor for \(\varphi \) if the following properties are satisfied:

(a):

\(\mathcal {A}(\omega )\) is compact for every \(\omega \in \Omega \);

(b):

\(\{\mathcal {A}(\omega )\}_{\omega \in \Omega }\) is \(\varphi \)-invariant, i.e.,

$$\begin{aligned} \varphi (t,\omega ,\mathcal {A}(\omega ))=\mathcal {A}(\theta _{t}\omega ) \text{ for } \text{ all } t\ge 0, \omega \in \Omega ; \end{aligned}$$
(c):

\(\{\mathcal {A}(\omega )\}_{\omega \in \Omega }\) pullback attracts every set in \(\mathcal {D}\), i.e., for every \(D=\{D(\omega )\}_{\omega \in \Omega }\in \mathcal {D}\),

$$\begin{aligned} \lim \limits _{t\rightarrow \infty }d(\varphi (t,\theta _{-t}\omega ,D(\theta _{-t}\omega )),\mathcal {A}(\omega ))=0, \end{aligned}$$

where d denotes the Hausdorff semimetric in X, \(d(A,B)=\sup \nolimits _{a\in A}\inf \limits _{b\in B}\Vert a-b\Vert _X\), for any subsets \(A\subseteq X\) and \(B\subseteq X\).

The following theorem provides a criterion for the existence of random attractors, see Theorem 4 in [11]. The uniqueness follows from Corollary 1 in [11].

Theorem 2.7

There exists a \(\mathcal {D}\)-random (pullback) attractor for \(\varphi \) if and only if there exists a compact random set that pullback attracts all random sets \(D\in \mathcal {D}\). Moreover, the random (pullback) attractor is unique.

One way of proving the existence of the random attractor, which in addition implies its finite fractal dimension, is to show that a random exponential attractor exists. Exponential attractors are compact sets of finite fractal dimension that contain the global attractor and attract at an exponential rate. This notion was first introduced for semigroups in the autonomous deterministic setting [12] and has later been extended to nonautonomous and random dynamical systems, see [7, 9] and the references therein.

Here, we consider so-called nonautonomous random exponential attractors, see [7]. While random exponential attractors in the strict sense are positively \(\varphi \)-invariant, nonautonomous random exponential attractors are positively \(\varphi \)-invariant in a weaker, nonautonomous sense. Constructing exponential attractors for time-continuous random dynamical systems that are positively \(\varphi \)-invariant typically requires Hölder continuity in time of the cocycle, which is a restrictive assumption. However, if we relax the invariance property and consider nonautonomous random exponential attractors instead, only the Lipschitz continuity of the cocycle in space is needed. In fact, the construction can be essentially simplified: we obtain better bounds for the fractal dimension, and the assumption of Hölder continuity in time can be omitted, see [7]. Even though we could prove Hölder continuity in time of the cocycle for our particular problem, we omit it, since it has no added value for our main results and would lead to weaker bounds for the fractal dimension.

Definition 2.8

A nonautonomous tempered random set \(\{\mathcal {M}(t,\omega )\}_{t\in \mathbb {R}, \omega \in \Omega }\) is called a nonautonomous \(\mathcal {D}\)-random (pullback) exponential attractor for \(\varphi \) if there exists \({\tilde{t}}>0\) such that \(\mathcal {M}(t+{\tilde{t}},\omega )=\mathcal {M}(t,\omega )\) for all \(t\in \mathbb {R},\omega \in \Omega ,\) and the following properties are satisfied:

(a):

\(\mathcal {M}(t,\omega )\) is compact for every \(t\in \mathbb {R},\omega \in \Omega \);

(b):

\(\{\mathcal {M}(t,\omega )\}_{t\in \mathbb {R},\omega \in \Omega }\) is positively \(\varphi \)-invariant in the nonautonomous sense, i.e.,

$$\begin{aligned} \varphi (s,\omega ,\mathcal {M}(t,\omega ))\subseteq \mathcal {M}(s+t,\theta _{s}\omega )\quad \text{ for } \text{ all } s\ge 0, t\in \mathbb {R}, \omega \in \Omega ; \end{aligned}$$
(c):

\(\{\mathcal {M}(t,\omega )\}_{t\in \mathbb {R}, \omega \in \Omega }\) is pullback \(\mathcal {D}\)-attracting at an exponential rate, i.e., there exists \(\alpha >0\) such that

$$\begin{aligned} \lim \limits _{s\rightarrow \infty }\text {e}^{\alpha s}d(\varphi (s,\theta _{-s}\omega ,D(\theta _{-s}\omega )),\mathcal {M}(t,\omega ))=0\quad \text {for all } D\in \mathcal {D}, t\in \mathbb {R},\omega \in \Omega ; \end{aligned}$$
(d):

the fractal dimension of \(\{\mathcal {M}(t,\omega )\}_{t\in \mathbb {R}, \omega \in \Omega }\) is finite, i.e., there exists a random variable \(k(\omega )\ge 0\) such that

$$\begin{aligned} \sup _{t\in \mathbb {R}}\text {dim}_f(\mathcal {M}(t,\omega ))\le k(\omega )<\infty \quad \text {for all } \omega \in \Omega . \end{aligned}$$

We recall that the fractal dimension of a precompact subset \(M\subset X\) is defined as

$$\begin{aligned} \text {dim}_f(M)=\limsup _{\varepsilon \rightarrow 0}\log _{\frac{1}{\varepsilon }}(N_\varepsilon (M)), \end{aligned}$$

where \(N_\varepsilon (M)\) denotes the minimal number of \(\varepsilon \)-balls in X with centers in M needed to cover the set M.
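As a simple illustration (a standard example, added for orientation), consider the unit cube \([0,1]^n\subset \mathbb {R}^n\): one needs \(N_\varepsilon \simeq \varepsilon ^{-n}\) balls of radius \(\varepsilon \) to cover it, and hence

$$\begin{aligned} \text {dim}_f([0,1]^n)=\limsup _{\varepsilon \rightarrow 0}\frac{\log (N_\varepsilon ([0,1]^n))}{\log (\frac{1}{\varepsilon })} =\limsup _{\varepsilon \rightarrow 0}\frac{n\log (\frac{1}{\varepsilon })+\mathcal {O}(1)}{\log (\frac{1}{\varepsilon })}=n, \end{aligned}$$

consistent with the usual notion of dimension; for attractors, the fractal dimension may of course be noninteger.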

By Theorem 2.7, the existence of a nonautonomous random exponential attractor immediately implies that the (global) random attractor exists. Moreover, the global random attractor is contained in the random exponential attractor, and hence, its fractal dimension is finite.

Existence proofs for global and exponential random attractors are typically based on the existence of a pullback \(\mathcal {D}\)-absorbing set for \(\varphi \).

Definition 2.9

A set \(\{B(\omega )\}_{\omega \in \Omega }\in \mathcal {D}\) is called random pullback \(\mathcal {D}\)-absorbing for \(\varphi \) if for every \(D=\{D(\omega )\}_{\omega \in \Omega }\in \mathcal {D}\) and \(\omega \in \Omega \), there exists a random time \(T_{D}(\omega )\ge 0\) such that

$$\begin{aligned} \varphi (t,\theta _{-t}\omega ,D(\theta _{-t}\omega ))\subseteq B(\omega ) \quad \text{ for } \text{ all } t\ge T_{D}(\omega ). \end{aligned}$$

The following condition is convenient for showing the existence of an absorbing set. Namely, if for every \(D\in \mathcal {D}\), \(\omega \in \Omega \) and \(x\in D(\theta _{-t}\omega )\), it holds that

$$\begin{aligned} \limsup \limits _{t\rightarrow \infty } \Vert \varphi (t,\theta _{-t}\omega ,x)\Vert _X\le \rho (\omega ), \end{aligned}$$
(2.2)

where \(\rho (\omega )>0\) for every \(\omega \in \Omega \), then the ball \(B(\omega ):=B(0,\rho (\omega )+\delta )\) centered at 0 with radius \(\rho (\omega )+\delta \), for some constant \(\delta >0\), is a random absorbing set. For further details and applications, see [5, 28].

Instead of considering random exponential attractors, which is typically more involved and requires verifying additional properties of the cocycle, the existence of random attractors is frequently shown using the following result, see Theorem 2.1 in [28].

Theorem 2.10

Let \(\varphi \) be a continuous random dynamical system on X over \((\Omega ,\mathcal {F},\mathbb {P},(\theta _{t})_{t\in \mathbb {R}})\). Suppose that \(\{B(\omega )\}_{\omega \in \Omega }\) is a compact random absorbing set for \(\varphi \) in \(\mathcal {D}\). Then, \(\varphi \) has a unique \(\mathcal {D}\)-random attractor \(\{\mathcal {A}(\omega )\}_{\omega \in \Omega }\) which is given by

$$\begin{aligned} \mathcal {A}(\omega )=\bigcap \limits _{\tau \ge 0} \overline{\bigcup \limits _{t\ge \tau }\varphi (t,\theta _{-t}\omega ,B(\theta _{-t}\omega ))}. \end{aligned}$$

We could apply Theorem 2.10 to prove the existence of a random attractor for our particular problem. However, showing that a nonautonomous random exponential attractor exists not only implies the existence of the random attractor, but also its finite fractal dimension. Moreover, it turns out to be even simpler in our case than applying Theorem 2.10. To this end, we use an existence result for random exponential attractors obtained in [7], which we recall in the next subsection.

2.2 An existence result for random exponential attractors

The existence result for random pullback exponential attractors is based on an auxiliary normed space that is compactly embedded into the phase space and the entropy properties of this embedding. We recall some notions and results that we will need in the sequel, see also [7,8,9].

The (Kolmogorov) \(\varepsilon \)-entropy of a precompact subset M of a Banach space X is defined as

$$\begin{aligned} {\mathcal {H}}_\varepsilon ^X(M)=\log _2(N_\varepsilon ^X(M)), \end{aligned}$$

where \(N_\varepsilon ^X(M)\) denotes the minimal number of \(\varepsilon \)-balls in X with centers in M needed to cover the set M. It was first introduced by Kolmogorov and Tihomirov [14]. The order of growth of \({\mathcal {H}}_\varepsilon ^X(M)\) as \(\varepsilon \) tends to zero is a measure for the massiveness of the set M in X, even if its fractal dimension is infinite.

If X and Y are Banach spaces such that the embedding \(Y\hookrightarrow X\) is compact, we use the notation

$$\begin{aligned} {\mathcal {H}}_\varepsilon (Y;X)={\mathcal {H}}_\varepsilon ^X(B^Y(0,1)), \end{aligned}$$

where \(B^Y(0,1)\) denotes the closed unit ball in Y.

Remark 2.11

The \(\varepsilon \)-entropy is related to the entropy numbers \({\hat{e}}_k\) for the embedding \(Y\hookrightarrow X,\) which are defined by

$$\begin{aligned} {\hat{e}}_k=\inf \left\{ \varepsilon >0 : B^Y(0,1)\subset \bigcup _{j=1}^{2^{k-1}}B^X(x_j,\varepsilon ),\ x_j\in X, \ j=1,\dots ,2^{k-1}\right\} , \end{aligned}$$

\(k\in \mathbb {N}.\) If the embedding is compact, then \({\hat{e}}_k\) is finite for all \(k\in \mathbb {N}\). For certain function spaces, the entropy numbers can explicitly be estimated (see [13]). For instance, if \(D\subset \mathbb {R}^n\) is a smooth bounded domain, then the embedding of the Sobolev spaces

$$\begin{aligned} W^{l_1,p_1}(D)\hookrightarrow W^{l_2,p_2}(D),\qquad l_1,l_2\in \mathbb {R}, \ p_1,p_2\in (1,\infty ), \end{aligned}$$

is compact if \(l_1>l_2\) and \(\frac{l_1}{n} - \frac{1}{p_1} > \frac{l_2}{n}-\frac{1}{p_2}.\) Moreover, the entropy numbers decay polynomially, namely

$$\begin{aligned} {\hat{e}}_k \simeq k^{-\frac{l_1-l_2}{n}} \end{aligned}$$

(see Theorem 2, Section 3.3.3 in [13]), and consequently,

$$\begin{aligned} {\mathcal {H}}_\varepsilon (W^{l_1,p_1}(D);W^{l_2,p_2}(D))\le c \varepsilon ^{-\frac{n}{l_1-l_2}}, \end{aligned}$$

for some constant \(c>0\). Here, we write \(f\simeq g\) if there exist positive constants \(c_1\) and \(c_2\) such that

$$\begin{aligned} c_1f\le g \le c_2f. \end{aligned}$$

The following existence result for nonautonomous random pullback exponential attractors is a special case of the main result in [7]. In fact, we formulate a simplified version that suffices for the parabolic stochastic evolution problem we consider. In particular, we assume that the cocycle is uniformly Lipschitz continuous and satisfies the smoothing property with constants that are independent of \(\omega \). More generally, one can allow the constants to depend on the random parameter \(\omega \) and the cocycle to be merely asymptotically compact, i.e., the sum of a mapping satisfying the smoothing property and a contraction.

Theorem 2.12

Let \(\varphi \) be a random dynamical system in a separable Banach space X, and let \(\mathcal {D}\) denote the universe of tempered random sets. Moreover, we assume that the following properties hold for all \(\omega \in \Omega \):

\((H_1)\):

Compact embedding: There exists another separable Banach space Y that is compactly and densely embedded into X.

\((H_2)\):

Random pullback absorbing set: There exists a random closed set \(B\in \mathcal {D}\) that is pullback \(\mathcal {D}\)-absorbing, and the absorbing time corresponding to a random set \(D\in \mathcal {D}\) satisfies \(T_{D,\theta _{-t}\omega }\le T_{D,\omega }\) for all \(t\ge 0\).

\((H_3)\):

Smoothing property: There exist \({\tilde{t}}>T_{B,\omega }\) and a constant \(\kappa >0\) such that

$$\begin{aligned} \Vert \varphi ({\tilde{t}},\omega ,u)-\varphi ({\tilde{t}},\omega ,v)\Vert _Y\le \kappa \Vert u-v\Vert _X\qquad \forall u,v\in B(\omega ). \end{aligned}$$
\((H_4)\):

Lipschitz continuity: There exists a constant \(L_\varphi >0\) such that

$$\begin{aligned} \Vert \varphi (s,\omega ,u)-\varphi (s,\omega ,v)\Vert _{X}\le L_\varphi \Vert u-v\Vert _{X}\qquad \forall s\in [0,\tilde{t}],\ u,v\in B(\omega ). \end{aligned}$$

Then, for every \(\nu \in (0,\frac{1}{2})\) there exists a nonautonomous random pullback exponential attractor, and its fractal dimension is uniformly bounded by

$$\begin{aligned} \text {dim}_f(\mathcal {M}^\nu (t,\omega ))\le \log _{\frac{1}{2\nu }}\left( N_{\frac{\nu }{\kappa }}^X(B^Y(0,1))\right) \qquad \forall t\in \mathbb {R},\ \omega \in \Omega . \end{aligned}$$
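To see what this bound yields in a concrete situation, we add a sample computation combining Theorem 2.12 with Remark 2.11. Suppose \(Y=W^{l_1,p_1}(D)\) and \(X=W^{l_2,p_2}(D)\), so that \({\mathcal {H}}_{\varepsilon }(Y;X)\le c\varepsilon ^{-\frac{n}{l_1-l_2}}\). Since \(N_{\frac{\nu }{\kappa }}^X(B^Y(0,1))=2^{{\mathcal {H}}_{\frac{\nu }{\kappa }}(Y;X)}\) and \(\log _{\frac{1}{2\nu }}(x)=\log _2(x)/\log _2(\frac{1}{2\nu })\), the dimension estimate becomes

$$\begin{aligned} \text {dim}_f(\mathcal {M}^\nu (t,\omega ))\le \frac{{\mathcal {H}}_{\frac{\nu }{\kappa }}(Y;X)}{\log _2(\frac{1}{2\nu })}\le \frac{c}{\log _2(\frac{1}{2\nu })}\left( \frac{\kappa }{\nu }\right) ^{\frac{n}{l_1-l_2}}. \end{aligned}$$

In particular, the bound grows with the smoothing constant \(\kappa \) and deteriorates as the regularity gap \(l_1-l_2\) shrinks.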

2.3 Pathwise mild solutions for parabolic SPDEs

Let \(\Delta :=\{(s,t)\in \mathbb {R}^2: s\le t\}\), let X be a separable, reflexive, type 2 Banach space, and let \(({\overline{\Omega }},\overline{\mathcal {F}}, \overline{\mathbb {P}})\) be a probability space. Similarly to [26], we consider nonautonomous SPDEs of the form

$$\begin{aligned} {\text {d}}u(t)&= A(t,\overline{\omega }) u(t) ~{\text {d}}t + F(u(t)) ~{\text {d}}t + \sigma (t,u(t))~{\text {d}}W(t),&t>s,\nonumber \\ u(s)&=u_{0} \in X,&s\in \mathbb {R}, \end{aligned}$$
(2.3)

where \(A=\{A(t,{\overline{\omega }})\}_{t\in \mathbb {R},\overline{\omega }\in \overline{\Omega }}\) is a family of time-dependent random differential operators. Intuitively, this means that the differential operator depends on a stochastic process in a meaningful way, which will be specified later.

We aim to investigate the longtime behavior of (2.3) using a random dynamical systems approach. First, we recall sufficient conditions that ensure that the family A generates a parabolic stochastic evolution system, see [26]. In particular, we make the following assumptions concerning measurability, sectoriality and Hölder continuity of the operators.

Assumption 1

(A0):

    We assume that the operators are closed, densely defined and have a common domain, \(\mathcal {D}_A:=D(A(t,{\overline{\omega }}))\) for all \(t\in \mathbb {R}\), \(\overline{\omega }\in \overline{\Omega }\).

(A1):

    The mapping \(A:\mathbb {R}\times \overline{\Omega }\rightarrow \mathcal {L}(\mathcal {D}_A,X)\) is strongly measurable and adapted.

(A2):

    There exist \(\vartheta \in (\frac{\pi }{2},\pi )\) and \(M>0\) such that \(\Sigma _\vartheta :=\{\mu \in \mathbb {C} : |\text {arg }\mu |<\vartheta \}\subset \rho (A(t,\overline{\omega }))\) and

    $$\begin{aligned} \Vert R(\mu ,A(t,\overline{\omega }))\Vert _{\mathcal {L}(X)}\le \frac{M}{|\mu |+1}\qquad \text {for all}\ \mu \in \Sigma _\vartheta \cup \{0\}, t\in \mathbb {R}, \overline{\omega }\in \overline{\Omega }. \end{aligned}$$
(A3):

    There exist \(\nu \in (0,1]\) and a mapping \(C:\overline{\Omega }\rightarrow (0,\infty )\) such that

    $$\begin{aligned} \Vert A(t,\overline{\omega }) - A(s,\overline{\omega })\Vert _{\mathcal {L}(\mathcal {D}_A,X)} \le C({\overline{\omega }}) |t-s|^{\nu }\qquad \text {for all}\ s,t\in \mathbb {R},~ \overline{\omega }\in \overline{\Omega }, \end{aligned}$$
    (2.4)

    where we assume that \(C({\overline{\omega }})\) is uniformly bounded with respect to \({\overline{\omega }}\), see [26].

Assumptions (A2) and (A3) are referred to in the literature as the Kato–Tanabe assumptions, compare [2], p. 55, or [24], p. 150, and are common in the context of nonautonomous evolution equations. Since the constants in (A2) and (A3) are uniformly bounded w.r.t. \(\overline{\omega }\), all constants arising in the estimates below do not depend on \(\overline{\omega }\).

In the sequel, we denote by \(X_\eta \), \(\eta \in (-1,1]\), the fractional power spaces \(D((-A(t,\overline{\omega }))^\eta )\) endowed with the norm \(\Vert x\Vert _{X_\eta }=\Vert (-A(t,{\overline{\omega }}))^\eta x\Vert _X\) for \(t\in \mathbb {R}\), \(\overline{\omega }\in {\overline{\Omega }}\) and \(x\in X_\eta \).

Assumption 2

(AC):

    We assume that the operators \(A(t,\overline{\omega }), t\in \mathbb {R},\overline{\omega }\in \overline{\Omega },\) have a compact inverse. This implies that the embeddings \(X_\eta \hookrightarrow X\), \(\eta \in (0,1]\), are compact.

(U):

    The evolution family is uniformly exponentially stable, i.e., there exist constants \(\lambda >0\) and \(c>0\) such that

    $$\begin{aligned}&\Vert U(t,s,\overline{\omega })\Vert _{\mathcal {L}(X)} \le c e ^{-\lambda (t-s)} \quad \text{ for } \text{ all } (s,t)\in \Delta \text{ and } \overline{\omega }\in \overline{\Omega }. \end{aligned}$$
    (2.5)
(Drift):

    The nonlinearity \(F:X\rightarrow X\) is globally Lipschitz continuous, i.e., there exists a constant \(C_{F}>0\) such that

    $$\begin{aligned} \Vert F(x) -F(y)\Vert _{X}\le C_{F}\Vert x-y\Vert _{X}\quad \text{ for } \text{ all } x,y\in X \text{ and } {\overline{\omega }}\in \overline{\Omega }. \end{aligned}$$

    This implies a linear growth condition on F. Namely, there exists a positive constant \(\overline{C}_{F}\) (one may take \(\overline{C}_{F}=\Vert F(0)\Vert _{X}\)) such that

    $$\begin{aligned} \Vert F(x)\Vert _{X}\le \overline{C}_{F} + C_{F}\Vert x\Vert _{X}\quad \text{ for } \text{ all } x\in X \text{ and } {\overline{\omega }}\in \overline{\Omega }. \end{aligned}$$
    (2.6)

    Furthermore, we assume that \(\lambda - c C_{F}>0\).

(Noise):

    We assume that W(t) is a two-sided Wiener process with values in \(X_{\beta }\), \(\beta \in (0,1]\). Furthermore, we set \(\sigma (t,u):=\sigma >0\).

Based on Assumption 1, applying [1, Thm. 2.3] pointwise in \(\overline{\omega }\in \overline{\Omega }\) yields the following theorem, see [26, Theorem 2.2]. The measurability was shown in [26, Proposition 2.4]. Before we state the result, we recall the definition of strong measurability of random operators.

Definition 2.13

Let \(X_1\) and \(X_2\) be two separable Banach spaces. A random operator \(L:\overline{\Omega }\times X_1\rightarrow X_2\) is called strongly measurable if the mapping \(\overline{\omega }\mapsto L(\overline{\omega })x\), \({\bar{\omega }}\in \overline{\Omega }\), is a random variable on \(X_2\) for every \(x\in X_1\).

Theorem 2.14

There exists a unique parabolic evolution system \(U:\Delta \times \overline{\Omega }\rightarrow \mathcal {L}(X)\) with the following properties:

(1):

\(U(t,t,\overline{\omega })=\text{ Id }\) for all \(t\ge 0\), \(\overline{\omega }\in \overline{\Omega }\).

(2):

We have

$$\begin{aligned} U(t,s,\overline{\omega })U(s,r,\overline{\omega })=U(t,r,\overline{\omega }) \end{aligned}$$
(2.7)

for all \(0\le r\le s\le t\), \(\overline{\omega }\in \overline{\Omega }\).

(3):

The mapping \(U(\cdot ,\cdot ,{\overline{\omega }})\) is strongly continuous for all \(\overline{\omega }\in \overline{\Omega }\).

(4):

For \(s<t\), the following identity holds pointwise in \(\overline{\Omega }\)

$$\begin{aligned} \frac{d}{dt}U(t,s,\overline{\omega })=A(t,\overline{\omega })U(t,s,\overline{\omega }). \end{aligned}$$
(5):

The evolution system \(U:\Delta \times \overline{\Omega }\rightarrow \mathcal {L}(X)\) is strongly measurable in the uniform operator topology. Moreover, for every \(t\ge s\), the mapping \(\overline{\omega }\mapsto U(t,s,\overline{\omega })\in \mathcal {L}(X)\) is strongly \(\mathcal {F}_t\)-measurable in the uniform operator topology.

To prove the existence of random attractors, we need additional smoothing properties of the evolution system. The following properties and estimates were shown in Lemmas 2.6 and 2.7 in [26]. The exponential decay is a consequence of our assumption (U).

Lemma 2.15

We assume that the family of adjoint operators \(A^*(t,\overline{\omega })\) satisfies (A3) with exponent \(\nu ^*>0\). Then, for every \(t>0\), the mapping \(s\mapsto U(t,s,\overline{\omega })\) belongs to \(C^1([0,t);\mathcal {L}(X))\), and for all \(x\in \mathcal {D}_A\) one has

$$\begin{aligned} \frac{d}{ds}U(t,s,\overline{\omega })x=-U(t,s,\overline{\omega })A(s,\overline{\omega })x. \end{aligned}$$

Moreover, for \(\alpha \in [0,1]\) and \(\eta \in (0,1)\) there exist positive constants \({\widetilde{C}}_\alpha , {\widetilde{C}}_{\alpha ,\eta }\) such that the following estimates hold for \(t>s\) and \({\bar{\omega }}\in \overline{\Omega }\):

$$\begin{aligned} \Vert (-A(t,{\overline{\omega }}))^\alpha U(t,s,{\overline{\omega }})x\Vert _X&\le {\widetilde{C}}_\alpha \frac{\text {e}^{-\lambda (t-s)}}{(t-s)^{\alpha }}\Vert x\Vert _X,&x\in X;\\ \Vert U(t,s,{\overline{\omega }})(-A(s,{\overline{\omega }}))^\alpha x\Vert _X&\le {\widetilde{C}}_\alpha \frac{e^{-\lambda (t-s)}}{(t-s)^{\alpha }}\Vert x\Vert _X,&x\in X_\alpha ;\\ \Vert (-A(t,{\overline{\omega }}))^{-\alpha } U(t,s,{\overline{\omega }}) (-A(s,{\overline{\omega }}))^\eta x\Vert _X&\le {\widetilde{C}}_{\alpha ,\eta } \frac{\text {e}^{-\lambda (t-s)}}{(t-s)^{\eta -\alpha }}\Vert x\Vert _X,&x\in X_\eta . \end{aligned}$$

To shorten notation, in the sequel we omit the \(\overline{\omega }\)-dependence of A and U if there is no danger of confusion. The classical mild formulation of the SPDE (2.3) is

$$\begin{aligned} u(t) = U(t,0)u_{0} + \int \limits _{0}^{t} U(t,s)F(u(s)) ~{\text {d}}s + \sigma \int \limits _{0}^{t} U(t,s)~{\text {d}}W(s). \end{aligned}$$
(2.8)

However, the Itô integral is not well defined, since the mapping \(\overline{\omega }\mapsto U(t,s,\overline{\omega })\) is, in general, only \(\mathcal {F}_{t}\)-measurable and not \(\mathcal {F}_{s}\)-measurable, see [26, Prop. 2.4]. To overcome this problem, Pronk and Veraar introduced in [26] the concept of pathwise mild solutions. In our particular case, this notion leads to the integral representation

$$\begin{aligned} u(t)=&\ U (t,0) u_{0} + \sigma U(t,0) W(t) + \int \limits _{0}^{t} U(t,s)F(u(s))~{\text {d}}s\nonumber \\&\ -\sigma \int \limits _{0}^{t}U(t,s)A(s) (W(t) -W(s)) ~{\text {d}}s. \end{aligned}$$
(2.9)

The formula is motivated by formally applying integration by parts for the stochastic integral, and, as shown in [26], it indeed yields a pathwise representation for the solution.
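For the reader's convenience, we sketch the formal computation behind (2.9) (a heuristic derivation; the rigorous justification is given in [26]). Using \(\frac{\partial }{\partial s}U(t,s)=-U(t,s)A(s)\) (see Lemma 2.15) and \(W(0)=0\), a formal integration by parts gives

$$\begin{aligned} \int \limits _{0}^{t}U(t,s)~{\text {d}}W(s)&=\big [U(t,s)(W(s)-W(t))\big ]_{s=0}^{s=t}+\int \limits _{0}^{t}U(t,s)A(s)(W(s)-W(t))~{\text {d}}s\\&=U(t,0)W(t)-\int \limits _{0}^{t}U(t,s)A(s)(W(t)-W(s))~{\text {d}}s. \end{aligned}$$

Substituting this expression into the classical mild formulation (2.8) yields precisely (2.9); the right-hand side no longer contains a stochastic integral and is therefore defined pathwise.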

Our aim is to show the existence of random attractors for SPDEs using this concept of pathwise mild solutions. It allows us to study random attractors without transforming the SPDE into a random PDE, as it is typically done.

Remark 2.16

We emphasize that the concept of pathwise mild solutions also applies if \(\sigma \) is not constant, see [26, Sec. 5]. In this case, the solution of (2.3) is given by

$$\begin{aligned} u(t)&= U(t,0)u_{0} + U(t,0) \int \limits _{0}^{t}\sigma (s,u(s))~{\text {d}}W(s) + \int \limits _{0}^{t} U(t,s)F(u(s))~{\text {d}}s \end{aligned}$$
(2.10)
$$\begin{aligned}&\quad - \int \limits _{0}^{t} U(t,s)A(s)\int \limits _{s}^{t}\sigma (\tau ,u(\tau ))~{\text {d}}W(\tau )~{\text {d}}s. \end{aligned}$$
(2.11)

However, it is not possible to obtain a random dynamical system in this case, due to the presence of the stochastic integrals in (2.10) and (2.11) which are not defined in a pathwise sense. Consequently, this representation formula does not hold for every \(\overline{\omega }\in \overline{\Omega }\). We aim to investigate this issue in a future work.

Recalling that W is an \(X_{\beta }\)-valued Wiener process, we introduce the canonical probability space

$$\begin{aligned} \Omega :=(C_{0}(\mathbb {R};X_{\beta }),\mathcal {B}(C_{0}(\mathbb {R};X_{\beta })),\mathbb {P}) \end{aligned}$$
(2.12)

and identify \(W(t,\omega )=:\omega (t)\), for \(\omega \in \Omega \). Moreover, together with the Wiener shift,

$$\begin{aligned} \theta _{t}\omega (s)=\omega (t+s)-\omega (t), \quad \omega \in \Omega , \ s,t\in \mathbb {R}, \end{aligned}$$

we obtain, analogously as in Sect. 2.1, the ergodic metric dynamical system \((\Omega ,\mathcal {F},\mathbb {P},(\theta _{t})_{t\in \mathbb {R}})\).

In the following, \((\Omega ,\mathcal {F},\mathbb {P})\) always denotes the probability space (2.12).

3 Random attractors for nonautonomous random SPDEs

3.1 Random dynamical system and absorbing set

Since we consider SPDEs with time-dependent random differential operators, we need to impose additional structural assumptions in order to use the framework of random dynamical systems, see [6, 20].

Assumption 3

(RDS):

    We assume that the generators depend on t and \(\omega \) in the following way:

    $$\begin{aligned} A(t,\omega )=A(\theta _{t}\omega )\quad \text{ for } \text{ all } t\in \mathbb {R} \text{ and } \omega \in \Omega . \end{aligned}$$
    (3.1)

This assumption is needed to obtain the cocycle property. In this case, the group property of the metric dynamical system implies that

$$\begin{aligned} A(\theta _{s}\theta _{t-s}\omega )=A(\theta _{t}\omega )\quad \text{ for } \text{ all } t, s\in \mathbb {R} \text{ and } \omega \in \Omega . \end{aligned}$$

Moreover, one can easily show that \(A(\theta _t\omega )\) generates a random dynamical system, i.e., the solution operator corresponding to the linear evolution equation

$$\begin{aligned} {\text {d}}u(t)&= A(\theta _t\omega ) u(t)~{\text {d}}t\\ u(0)&=u_{0}\in X \end{aligned}$$

forms a random dynamical system.

From now on, we always assume that Assumptions 1, 2 and 3 hold and that the family of adjoint operators \(A^*\) satisfies (A3) with \(\nu ^*\in (0,1].\)

Theorem 3.1

Let \(U:\Delta \times \Omega \rightarrow \mathcal {L}(X)\) be the evolution operator generated by \(A(\theta _t\omega )\). Then, \({\widetilde{U}}:\mathbb {R}^{+}\times \Omega \times X\rightarrow X\) defined as

$$\begin{aligned} {\widetilde{U}}(t,\omega ):=U(t,0,\omega ),\quad t\ge 0, \end{aligned}$$
(3.2)

is a random dynamical system.

Proof

The cocycle property immediately follows from (2.7). In fact, let \(t,s\ge 0\). Then,  (2.7) implies that

$$\begin{aligned} U(t+s,0,\omega )=U(t+s,s,\omega )U(s,0,\omega ). \end{aligned}$$

Moreover, we observe that \(U(t+s,s,\omega )=U(t,0,\theta _{s}\omega )\) since \(A(\theta _{t}\omega )=A(\theta _{s}\theta _{t-s}\omega )\). Intuitively, this means that starting at time s on the \(\omega \)-fiber of the noise and letting time \(t>0\) pass leads to the same state as starting at time zero on the shifted fiber \(\theta _{s}\omega \) and letting the system evolve for time t. At the level of the random generators, \(U(t+s,s,\omega )\) is obtained from \(A(\theta _{t}\omega )\), which is the same as \(A(\theta _{s} \theta _{t-s}\omega )\) due to the properties of the metric dynamical system. Therefore, the cocycle property

$$\begin{aligned} {\widetilde{U}}(t+s,\omega )={\widetilde{U}}(t,\theta _{s}\omega ) {\widetilde{U}}(s,\omega ) \end{aligned}$$
(3.3)

is satisfied. The measurability of \(\tilde{U}\) follows from Theorem 2.14, Property 5. \(\square \)

Remark 3.2

We obtain the measurability of \({\widetilde{U}}\) directly from the results in [26]. Alternatively, one can show the measurability of \({\widetilde{U}}\) as in [20, Lem. 14] using Yosida approximations of \(A(\omega )\). Here, one argues that the evolution operators corresponding to these approximations are strongly measurable and then passes to the limit. These arguments exploit the structural assumption (3.1). The proof of the measurability in [26] is more involved and holds under more general assumptions.

We give a standard example of a random nonautonomous generator and its corresponding evolution operator.

Example 3.3

A simple example for an operator that satisfies our assumptions is a random perturbation of a uniformly elliptic operator A (in a smooth bounded domain with homogeneous Dirichlet boundary conditions) by a real-valued Ornstein–Uhlenbeck process, which is the stationary solution of the Langevin equation

$$\begin{aligned} {\text {d}}z(t) = -\,\mu z(t)~{\text {d}}t + {\text {d}}\overline{W}(t). \end{aligned}$$

Here, \(\mu >0\) and \(\overline{W}\) is a two-sided real-valued Brownian motion. We denote by \((\overline{\Omega },\overline{\mathcal {F}},\overline{\mathbb {P}})\) its associated canonical probability space and make the identification \(\overline{W}(t,\overline{\omega }):=\overline{\omega }(t)\) for \({\overline{\omega }}\in {\overline{\Omega }}\). Then, we have that

$$\begin{aligned} z(\theta _{t}{\overline{\omega }}) =\int \limits _{-\infty }^{t} \text {e}^{-\mu (t-s)}~{\text {d}}\overline{\omega }(s) =\int \limits _{-\infty }^{0} \text {e}^{\mu s}~{\text {d}}\theta _{t}{\overline{\omega }}(s). \end{aligned}$$

In this case, the parabolic evolution operator generated by \(A+z(\theta _t{\overline{\omega }})\) is

$$\begin{aligned} {\widetilde{U}}(t,{\overline{\omega }}) :=T(t) \text {e}^{\int \limits _{0}^{t}z(\theta _{\tau }{\overline{\omega }})~{\text {d}}\tau }, \end{aligned}$$

where \((T(t))_{t\ge 0}\) is the analytic \(C_{0}\)-semigroup generated by A. We have

$$\begin{aligned} {\widetilde{U}}(t,{\overline{\omega }}) = \underbrace{T(t-s)\text {e}^{\int \limits _{s}^{t}z(\theta _{\tau }{\overline{\omega }})~ {\text {d}}\tau }}_{U(t,s,{\overline{\omega }})} \underbrace{T(s)\text {e}^{\int \limits _{0}^{s}z(\theta _{\tau }{\overline{\omega }}) ~{\text {d}}\tau }}_{U(s,0,{\overline{\omega }})}, \end{aligned}$$

and consequently, \({\widetilde{U}}(t-s,\theta _s{\overline{\omega }})=T(t-s)\text {e}^{\int \limits _{0}^{t-s}z(\theta _{\tau +s}{\overline{\omega }})~{\text {d}}\tau }\).
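As a numerical complement to this example (our own sketch; the Euler–Maruyama discretization, the choice of A as the Dirichlet Laplacian on \((0,\pi )\) acting diagonally on sine modes, and all parameter values are assumptions made for the demonstration), one can compute the diagonal action of \(U(t,s,{\overline{\omega }})\) and verify the composition property (2.7) numerically.

```python
import numpy as np

# Illustrative sketch (all parameter values are assumptions for the demo): we
# realize U(t,s,w) = T(t-s) * exp(int_s^t z(theta_tau w) dtau) on the sine
# Fourier modes of the Dirichlet Laplacian on (0, pi), where T(t) acts
# diagonally as exp(-k^2 t), and z is an Euler-Maruyama Ornstein-Uhlenbeck path.
def ou_path(mu=1.0, T=2.0, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    z = np.zeros(n + 1)
    for i in range(n):  # dz = -mu * z dt + dW
        z[i + 1] = z[i] - mu * z[i] * dt + rng.normal(0.0, np.sqrt(dt))
    return z, dt

def evolution_modes(z, dt, i_s, i_t, K=10):
    """Diagonal entries of U(t,s) on modes k = 1..K, where s = i_s*dt, t = i_t*dt."""
    k = np.arange(1, K + 1)
    seg = z[i_s:i_t + 1]
    integral = 0.5 * np.sum(seg[:-1] + seg[1:]) * dt   # trapezoidal rule
    return np.exp(-k**2 * (i_t - i_s) * dt + integral)

z, dt = ou_path()
U_t0 = evolution_modes(z, dt, 0, 2000)
U_comp = evolution_modes(z, dt, 1000, 2000) * evolution_modes(z, dt, 0, 1000)
print(np.allclose(U_t0, U_comp))  # two-parameter property U(t,0) = U(t,s) U(s,0)
```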

This simple example illustrates that the formalism we introduced above is meaningful. Further examples of random time-dependent generators are provided in Sect. 4. For additional applications, we refer to [26], and to [6, 20] in the context of random dynamical systems.

We now prove the existence of random attractors for SPDEs of the form

$$\begin{aligned} {\left\{ \begin{array}{ll} {\text {d}}u(t) = A(\theta _{t}\omega ) u(t) ~{\text {d}}t + F(u(t)) ~{\text {d}}t + \sigma ~{\text {d}}\omega (t)\\ u(0)=u_{0} \end{array}\right. } \end{aligned}$$
(3.4)

using pathwise mild solutions as defined in (2.9).

Remark 3.4

  • We emphasize that the SPDE (3.4) cannot be transformed into a PDE with random coefficients using the stationary Ornstein–Uhlenbeck process, since the convolution

    $$\begin{aligned} \int \limits _{0}^{t} U(t,s,\omega )~{\text {d}}\omega (s) \end{aligned}$$

    is not defined and one has to make sense of it using the integration by parts formula

    $$\begin{aligned}&\omega (t) + \int \limits _{0}^{t} U(t,s,\omega )A(\theta _s\omega ) \omega (s)~{\text {d}}s = U(t,0,\omega )\omega (t) \nonumber \\&\qquad -\, \int \limits _{0}^{t} U(t,s,\omega )A(\theta _s\omega )(\omega (t) -\omega (s))~{\text {d}}s. \end{aligned}$$
    (3.5)

    A different transformation based on a strictly stationary solution has been introduced in [15].

  • Another approach would be to subtract the noise, i.e., to introduce the change of variables \(v:=u-\sigma \omega \). This would formally lead to the random PDE

    $$\begin{aligned} {\text {d}}v(t)&= A(\theta _{t}\omega ) (v(t) +\sigma \omega (t)) ~{\text {d}}t + F (v(t) +\sigma \omega (t) )~{\text {d}}t\nonumber \\&= A(\theta _{t}\omega ) v(t)~{\text {d}}t + \sigma A(\theta _{t}\omega )\omega (t)~{\text {d}}t + F (v(t) +\sigma \omega (t) )~{\text {d}}t. \end{aligned}$$
    (3.6)

    The mild solution of (3.6) would be given by

    $$\begin{aligned} v(t)= & {} U(t,0,\omega ) v_0 + \sigma \int \limits _{0}^{t} U(t,s,\omega )A(\theta _s\omega )\omega (s)~{\text {d}}s \nonumber \\&+ \int \limits _{0}^{t} U(t,s,\omega )F(v(s)+\sigma \omega (s))~{\text {d}}s. \end{aligned}$$

    However, we would need to justify that this mild solution is well defined, and the noise also interacts with the nonlinear term. In order to avoid these difficulties, we work with pathwise mild solutions.

Using (3.5), the representation formula of a solution for (3.4) reads as

$$\begin{aligned} u(t)&= U(t,0)u_{0} + \sigma U(t,0)\omega (t) + \int \limits _{0}^{t}U(t,s)F(u(s))~{\text {d}}s \nonumber \\&\quad - \sigma \int \limits _{0}^{t} U(t,s)A(\theta _{s}\omega ) (\omega (t) -\omega (s)) ~{\text {d}}s\nonumber \\&= U(t,0)u_{0} + \sigma U(t,0)\omega (t) +\int \limits _{0}^{t}U(t,s)F(u(s))~{\text {d}}s\nonumber \\&\qquad - \sigma \int \limits _{0}^{t} U(t,s)A(\theta _{s}\omega ) \theta _{s}\omega (t-s) ~{\text {d}}s\nonumber \\&= {\widetilde{U}}(t,\omega )u_{0} + \sigma {\widetilde{U}}(t,\omega )\omega (t) + \int \limits _{0}^{t} {\widetilde{U}}(t-s,\theta _{s}\omega )F(u(s))~{\text {d}}s \end{aligned}$$
(3.7)
$$\begin{aligned}&\quad -\sigma \int \limits _{0}^{t}{\widetilde{U}}(t-s,\theta _{s}\omega )A(\theta _{s}\omega ) \theta _{s}\omega (t-s)~{\text {d}}s\nonumber \\&= {\widetilde{U}}(t,\omega )u_{0} +\int \limits _{0}^{t} {\widetilde{U}}(t-s,\theta _{s}\omega ) F(u(s)) ~{\text {d}}s\nonumber \\&\qquad +\sigma \omega (t) +\sigma \int \limits _{0}^{t} {\widetilde{U}}(t-s,\theta _{s}\omega )A(\theta _{s}\omega ) \omega (s)~{\text {d}}s. \end{aligned}$$
(3.8)

Here, we used in the last line that

$$\begin{aligned} \int \limits _{0}^{t} U(t,s,\omega )A(s,\omega )~{\text {d}}s=&\int \limits _{0}^{t}{\widetilde{U}}(t-s,\theta _{s}\omega ) A(\theta _{s}\omega )~{\text {d}}s = -U(t,t,\omega )\\&+ U(t,0,\omega )={\widetilde{U}}(t,\omega )- \text{ Id }, \end{aligned}$$

since

$$\begin{aligned} \frac{\partial }{\partial s} U(t,s,\omega )=-U(t,s,\omega )A(s,\omega ). \end{aligned}$$

Remark 3.5

We emphasize that the pathwise mild solution concept is applicable also under weaker assumptions on the noise, for instance, if \(\omega \) takes values in a suitable extrapolation space [25, Section 3.1]. Moreover, the formal computations made in (3.8) can be justified even if \(\omega \not \in \mathcal {D}_A\).

In fact, according to [26, Theorem 4.9], we know that the pathwise mild solution is equivalent to the weak solution of (3.4). For simplicity, we test the linear part of (3.4) (i.e., we take \(F\equiv 0\)) with \(x^{*}\in \mathcal {D}_{A^*}:= D(A^*(t))\). This yields

$$\begin{aligned} \langle u(t),x^{*}\rangle&= \langle U(t,0)u_0,x^{*}\rangle +\sigma \langle U(t,0)\omega (t),x^{*}\rangle \nonumber \\&\qquad - \sigma \int \limits _{0}^{t} \langle U(t,s) A(\theta _s\omega ) (\omega (t)-\omega (s) ),x^{*}\rangle ~{\text {d}}s, \end{aligned}$$
(3.9)

where \(\langle \cdot ,\cdot \rangle \) denotes the dual pairing. Plugging the identity

$$\begin{aligned} \int \limits _{0}^{t}\langle U(t,s) A(\theta _s\omega )\omega (t),x^{*}\rangle ~{\text {d}}s = \langle U(t,0)\omega (t),x^{*}\rangle -\langle \omega (t),x^{*}\rangle , \end{aligned}$$

which holds for \(\omega (\cdot )\in X\) (see [26, Section 4.4]), into (3.9) yields

$$\begin{aligned} \langle u(t),x^{*}\rangle = \langle U(t,0)u_0,x^{*}\rangle + \sigma \langle \omega (t), x^{*} \rangle +\sigma \int \limits _{0}^{t}\langle U(t,s)A(\theta _s\omega )\omega (s), x^{*}\rangle ~{\text {d}}s. \end{aligned}$$

Lemma 3.6

The solution operator corresponding to (3.4) generates a random dynamical system \(\varphi :\mathbb {R}^{+}\times \Omega \times X \rightarrow X\).

Proof

We only verify the cocycle property. The continuity is straightforward, and the measurability of \(\varphi \) follows from the measurability of \(\tilde{U}\).

Let \(s,t\ge 0\). Using (3.8), we have

$$\begin{aligned}&\ \varphi (t+s,\omega ,u_{0}) \\&\quad =\ {\widetilde{U}}(t+s,\omega )u_{0} + \int \limits _{0}^{t+s} {\widetilde{U}}(t+s-r,\theta _{r}\omega ) F(u(r)) ~{\text {d}}r+\sigma \omega (t+s) \\&\qquad +\sigma \int \limits _{0}^{t+s} {\widetilde{U}}(t+s-r,\theta _{r}\omega ) A(\theta _r\omega ) \omega (r) ~{\text {d}}r\\&\quad = \ {\widetilde{U}}(t,\theta _{s}\omega ) {\widetilde{U}}(s,\omega )u_{0} + {\widetilde{U}}(t,\theta _{s}\omega )\int \limits _{0}^{s}{\widetilde{U}}(s-r,\theta _{r}\omega ) F(u(r))~{\text {d}}r\\&\qquad + \int \limits _{s}^{s+t} {\widetilde{U}}(t+s-r,\theta _{r}\omega ) F(u(r)) dr+\sigma \omega (t+s)\\&\qquad +\sigma {\widetilde{U}}(t,\theta _{s}\omega ) \int \limits _{0}^{s} {\widetilde{U}}(s-r,\theta _{r}\omega ) A(\theta _{r}\omega )\omega (r)~{\text {d}}r \nonumber \\&\qquad +\sigma \int \limits _{s}^{s+t} {\widetilde{U}}(t+s-r,\theta _{r}\omega ) A(\theta _{r}\omega )\omega (r)~{\text {d}}r\\&\quad =\ {\widetilde{U}}(t,\theta _{s}\omega ) \Bigg [{\widetilde{U}}(s,\omega )u_{0} +\int \limits _{0}^{s}{\widetilde{U}}(s-r,\theta _{r}\omega )F(u(r))~{\text {d}}r \nonumber \\&\qquad +\sigma \int \limits _{0}^{s} {\widetilde{U}}(s-r,\theta _{r}\omega ) A(\theta _{r}\omega ) \omega (r)~{\text {d}}r \Bigg ] \\&\qquad + \int \limits _{s}^{s+t} {\widetilde{U}}(t+s-r,\theta _{r}\omega ) F(u(r)) ~{\text {d}}r +\sigma \omega (t+s) \nonumber \\&\qquad +\sigma \int \limits _{s}^{s+t} {\widetilde{U}}(t+s-r,\theta _{r}\omega ) A(\theta _{r}\omega )\omega (r)~{\text {d}}r. \end{aligned}$$

Using that

$$\begin{aligned}&\int \limits _{s}^{s+t} {\widetilde{U}}(t+s-r,\theta _{r}\omega ) A(\theta _{r}\omega )\omega (r)~{\text {d}}r= \int \limits _{0}^{t}{\widetilde{U}}(t-r,\theta _{s+r}\omega )A(\theta _{s+r}\omega ) \omega (r+s)~{\text {d}}r\\&\quad =\ \int \limits _{0}^{t} {\widetilde{U}}(t-r,\theta _{s+r}\omega )A(\theta _{s+r}\omega ) (\theta _{s}\omega (r)+ \omega (s) )~{\text {d}}r \\&\quad =\ \int \limits _{0}^{t} {\widetilde{U}}(t-r,\theta _{s+r}\omega )A(\theta _{s+r}\omega ) \theta _{s}\omega (r)~{\text {d}}r + ({\widetilde{U}}(t,\theta _{s}\omega )-\text{ Id })\,\omega (s), \end{aligned}$$

one immediately gets

$$\begin{aligned}&\varphi (t+s,\omega ,u_{0})\\&\quad =\ {\widetilde{U}}(t,\theta _{s}\omega ) \Bigg [{\widetilde{U}}(s,\omega )u_{0}\\&\qquad +\sigma \omega (s) +\int \limits _{0}^{s}{\widetilde{U}}(s-r,\theta _{r}\omega )F(u(r))~{\text {d}}r+\sigma \int \limits _{0}^{s} {\widetilde{U}}(s-r,\theta _{r}\omega )A(\theta _{r}\omega ) \omega (r)~{\text {d}}r \Bigg ]\\&\qquad +\int \limits _{0}^{t} {\widetilde{U}}(t-r,\theta _{s+r}\omega ) F(u(r+s)) ~{\text {d}}r \nonumber \\&\qquad + \sigma \int \limits _{0}^{t}{\widetilde{U}}(t-r,\theta _{s+r}\omega )A(\theta _{r+s}\omega )\theta _{s}\omega (r)~{\text {d}}r +\sigma \theta _{s}\omega (t)\\&\quad = \ \varphi (t,\theta _{s}\omega ,\varphi (s,\omega ,u_{0})). \end{aligned}$$

This proves the statement. \(\square \)

Next, we show the existence of an absorbing set. We recall that \(\lambda >cC_{F}\) as assumed in (Drift). Here, \(\lambda , c\) and \(C_F\) are the constants in (2.5) and (2.6).

From now on, the properties and statements hold for all \(\omega \in \Omega _0\), where \(\Omega _0\subset \Omega \) is the set of all \(\omega \) that have subexponential growth. The set \(\Omega _0\) is \((\theta _t)_{t\in \mathbb {R}}\)-invariant and has full measure, see, e.g., [5]. To simplify notation, in the sequel, we denote \(\Omega _0\) again by \(\Omega .\)

Lemma 3.7

The random dynamical system \(\varphi \) has a pullback absorbing set.

Proof

We verify (2.2). To this end, we use the estimates in Lemma 2.15 and Gronwall's lemma. Since \(\omega \) takes values in \(X_{\beta }\), we observe that

$$\begin{aligned} \Vert {\widetilde{U}}(t,\omega )\omega (t)\Vert _X= \Vert {\widetilde{U}}(t,\omega )(-A(\omega ))^{-\beta } (-A(\omega ))^{\beta }\omega (t)\Vert _X \le {\hat{c}} \text {e}^{-\lambda t} \Vert \omega (t)\Vert _{X_{\beta }}, \end{aligned}$$

for some constant \({\hat{c}}>0.\)

It is convenient to work with the representation formula (3.7). Evaluating it along the fiber \(\theta _{-t}\omega \), we obtain

$$\begin{aligned} \Vert u(t)\Vert _X\le & {} \Vert U(t,0)u_{0}\Vert _X + \sigma \Vert U(t,0)\theta _{-t}\omega (t)\Vert _X +\int \limits _{0}^{t}\Vert U(t,s)F(u(s))\Vert _X~{\text {d}}s\\&+ \sigma \int \limits _{0}^{t} \Vert U(t,s)A(\theta _{s-t}\omega ) \theta _{s-t}\omega (t-s) \Vert _X~{\text {d}}s\\\le & {} c \text {e}^{-\lambda t} \Vert u_{0}\Vert _X + \sigma {\hat{c}} \text {e}^{-\lambda t}\Vert \omega (-t)\Vert _{X_{\beta }} +\int \limits _{0}^{t}\Vert U(t,s)F(u(s))\Vert _X~{\text {d}}s\\&+ \sigma \int \limits _{0}^{t} \Vert {\widetilde{U}}(t-s,\theta _{s-t}\omega )A(\theta _{s-t}\omega ) \omega (s-t) \Vert _X~{\text {d}}s. \end{aligned}$$

For the nonlinear term, the linear growth condition (2.6) together with (2.5) yields

$$\begin{aligned} \Bigg \Vert \int \limits _{0}^{t} {\widetilde{U}}(t-s,\theta _{s-t}\omega )F(u(s))~{\text {d}}s \Bigg \Vert _X\le c\int \limits _{0}^{t} \text {e}^{-\lambda (t-s)} (\overline{C}_{F} + C_{F}\Vert u(s)\Vert _X)~{\text {d}}s, \end{aligned}$$

and for the generalized stochastic convolution we obtain

$$\begin{aligned}&\int \limits _{0}^{t}\Vert {\widetilde{U}}(t-s,\theta _{s-t}\omega )A(\theta _{s-t}\omega ) \omega (s-t)\Vert _X~{\text {d}}s \\&\quad \le \int \limits _{0}^{t} \Vert {\widetilde{U}}(t-s,\theta _{s-t}\omega )(-A(\theta _{s-t}\omega ))^{1-\beta }\Vert _{\mathcal {L}(X)} \Vert (-A(\theta _{s-t}\omega ))^\beta \omega (s-t)\Vert _X~{\text {d}}s\\&\quad \le {\widetilde{C}}_{1-\beta }\int \limits _{0}^{t}\frac{\text {e}^{-\lambda (t-s)}}{(t-s)^{1-\beta }}\Vert \omega (s-t)\Vert _{X_{\beta }}~{\text {d}}s, \end{aligned}$$

where \({\widetilde{C}}_{1-\beta }\) denotes the constant in Lemma 2.15. Hence, combining the estimates we obtain

$$\begin{aligned} \Vert u(t)\Vert _X&\le c \text {e}^{-\lambda t} \Vert u_{0}\Vert _X + \sigma {\hat{c}} \text {e}^{-\lambda t} \Vert \omega (-t)\Vert _{X_{\beta }} +c\int \limits _{0}^{t} \text {e}^{-\lambda (t-s)}(\overline{C}_{F} +C_{F}\Vert u(s)\Vert _X )~{\text {d}}s\\&\quad + \sigma {\widetilde{C}}_{1-\beta }\int \limits _{0}^{t} \frac{\text {e}^{-\lambda (t-s)}}{{(t-s)^{1-\beta }}} \Vert \omega (s-t)\Vert _{X_{\beta }}~{\text {d}}s. \end{aligned}$$

Setting

$$\begin{aligned} \gamma (t)&:=c \text {e}^{-\lambda t}\Vert u_{0}\Vert _X +\sigma {\hat{c}}\text {e}^{-\lambda t}\Vert \omega (-t)\Vert _{X_{\beta }}\\&\quad +c\overline{C}_{F}\int \limits _{0}^{t} \text {e}^{-\lambda (t-s)} ~{\text {d}}s + \sigma {\widetilde{C}}_{1-\beta }\int \limits _{0}^{t} \frac{\text {e}^{-\lambda (t-s)}}{(t-s)^{1-\beta }}\Vert \omega (s-t)\Vert _{X_{\beta }}~{\text {d}}s\\&\le c \text {e}^{-\lambda t}\Vert u_{0}\Vert _X +\sigma {\hat{c}}\text {e}^{-\lambda t}\Vert \omega (-t)\Vert _{X_{\beta }}+ \frac{c\overline{C}_{F}}{\lambda } \\&\quad + \sigma {\widetilde{C}}_{1-\beta }\int \limits _{-t}^{0} \frac{\text {e}^{\lambda r}}{(-r)^{1-\beta }}\Vert \omega (r)\Vert _{X_{\beta }}~{\text {d}}r, \end{aligned}$$

we can rewrite the previous inequality as

$$\begin{aligned} \Vert u(t)\Vert _X \le \gamma (t) +c C_{F} \int \limits _{0}^{t}\text {e}^{-\lambda (t-s)} \Vert u(s)\Vert _X~{\text {d}}s. \end{aligned}$$
(3.10)
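For completeness, we recall the integral form of Gronwall’s lemma used here (a standard statement): if \(v\ge 0\) is locally integrable and satisfies \(v(t)\le a(t)+k\int _0^t v(s)~{\text {d}}s\) for a locally integrable function \(a\) and a constant \(k>0\), then

$$\begin{aligned} v(t)\le a(t)+k\int \limits _{0}^{t}\text {e}^{k(t-s)}a(s)~{\text {d}}s,\qquad t\ge 0. \end{aligned}$$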

Applying Gronwall’s lemma to \(\text {e}^{\lambda t}\Vert u(t)\Vert _X\), we infer that

$$\begin{aligned} \text {e}^{\lambda t }\Vert u(t)\Vert _X \le \text {e}^{\lambda t } \gamma (t) +cC_{F} \int \limits _{0}^{t}\text {e}^{\lambda s}\gamma (s) \text {e}^{cC_{F}(t-s)}~{\text {d}}s, \end{aligned}$$

and multiplying with \(\text {e}^{-\lambda t}\), we obtain

$$\begin{aligned} \Vert u(t)\Vert _X \le \gamma (t) +cC_{F} \int \limits _{0}^{t} \text {e}^{-(\lambda -cC_{F})(t-s)} \gamma (s) ~{\text {d}}s. \end{aligned}$$
(3.11)

This estimate allows us to determine the pullback absorbing set. First, note that all terms in \(\gamma \) remain bounded as \(t\rightarrow \infty \) due to the subexponential growth of \(\Vert \omega (t)\Vert _{X_{\beta }}\), and consequently,

$$\begin{aligned} \gamma (t)&\le \text {e}^{-\lambda t} \Bigg (c \Vert u_{0}\Vert _X +{\hat{c}} \sigma \Vert \omega (-t)\Vert _{X_{\beta }} \Bigg ) \\&\qquad + \frac{c\overline{C}_{F}}{\lambda } +\sigma {\widetilde{C}}_{1-\beta }\int \limits _{-\infty }^{0} \frac{\text {e}^{\lambda r}}{(-r)^{1-\beta }} \Vert \omega (r)\Vert _{X_{\beta }}~{\text {d}}r<\infty . \end{aligned}$$
(3.12)

We now focus on the second term in (3.11),

$$\begin{aligned}&\int \limits _{0}^{t} \text {e}^{-(\lambda -cC_{F})(t-s)}\gamma (s)~{\text {d}}s\\&\quad \le \int \limits _{0}^{t} \text {e}^{-(\lambda -cC_{F})(t-s)} \Bigg ( c\text {e}^{-\lambda s}\Vert u_{0}\Vert _X +{\hat{c}}\sigma \text {e}^{-\lambda s} \Vert \omega (-s)\Vert _{X_{\beta }} + \frac{c\overline{C}_{F}}{\lambda }\\&\qquad + \sigma {\widetilde{C}}_{1-\beta } \int \limits _{-s}^{0} \frac{\text {e}^{\lambda r}}{(-r)^{1-\beta }} \Vert \omega (r)\Vert _{X_{\beta }} {\text {d}}r \Bigg ) ~{\text {d}}s. \end{aligned}$$

The first and the third terms are bounded by

$$\begin{aligned} \int \limits _{0}^{t} \text {e}^{-(\lambda -cC_{F})(t-s)} \text {e}^{-\lambda s}\Vert u_{0}\Vert _X ~{\text {d}}s\le \frac{\text {e}^{-(\lambda -cC_{F})t}}{cC_{F}} \Vert u_{0}\Vert _X \end{aligned}$$

and obviously

$$\begin{aligned} \frac{c\overline{C}_{F}}{\lambda }\int \limits _{0}^{t} \text {e}^{-(\lambda -cC_{F})(t-s)}~{\text {d}}s \le \frac{c\overline{C}_{F}}{\lambda (\lambda - cC_{F})}. \end{aligned}$$
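Both bounds follow by elementary integration; for instance, for the first one,

$$\begin{aligned} \int \limits _{0}^{t} \text {e}^{-(\lambda -cC_{F})(t-s)} \text {e}^{-\lambda s}~{\text {d}}s =\text {e}^{-(\lambda -cC_{F})t}\int \limits _{0}^{t}\text {e}^{-cC_{F}s}~{\text {d}}s =\frac{\text {e}^{-(\lambda -cC_{F})t}\big (1-\text {e}^{-cC_{F}t}\big )}{cC_{F}}\le \frac{\text {e}^{-(\lambda -cC_{F})t}}{cC_{F}}. \end{aligned}$$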

The second one can be estimated by

$$\begin{aligned} \sigma {\hat{c}}\text {e}^{-(\lambda -cC_{F})t}\int \limits _{0}^{t} \text {e}^{-cC_{F}s} \Vert \omega (-s)\Vert _{X_{\beta }}~{\text {d}}s= \sigma {\hat{c}} \text {e}^{-(\lambda -cC_{F})t}\int \limits _{-t}^{0} \text {e}^{cC_{F}s} \Vert \omega (s)\Vert _{X_{\beta }}~{\text {d}}s. \end{aligned}$$

Finally, for the last term, we observe that

$$\begin{aligned}&\sigma {\widetilde{C}}_{1-\beta }\int \limits _{0}^{t} \text {e}^{-(\lambda -cC_{F})(t-s)} \int \limits _{-s}^{0} \frac{\text {e}^{\lambda r}}{(-r)^{1-\beta }} \Vert \omega (r)\Vert _{X_{\beta }}{\text {d}}r~{\text {d}}s\\&\quad \le \sigma {\widetilde{C}}_{1-\beta }\int \limits _{0}^{t} \text {e}^{-(\lambda -cC_{F})(t-s)} ~{\text {d}}s\int \limits _{-\infty }^{0} \frac{\text {e}^{\lambda r}}{(-r)^{1-\beta }} \Vert \omega (r)\Vert _{X_{\beta }}{\text {d}}r\\&\quad =\frac{\sigma {\widetilde{C}}_{1-\beta }}{\lambda -cC_F}\int \limits _{-\infty }^{0} \frac{\text {e}^{\lambda r}}{(-r)^{1-\beta }} \Vert \omega (r)\Vert _{X_{\beta }}{\text {d}}r. \end{aligned}$$

In conclusion, using all the previous estimates in (3.11) we have

$$\begin{aligned} \begin{aligned} \Vert u(t)\Vert _X&\le \text {e}^{-\lambda t} \Bigg (c \Vert u_{0}\Vert _X + \sigma {\hat{c}} \Vert \omega (-t)\Vert _{X_{\beta }} \Bigg ) \\&\qquad + \frac{c\overline{C}_{F}}{\lambda } +\sigma {\widetilde{C}}_{1-\beta }\int \limits _{-\infty }^{0} \frac{\text {e}^{\lambda r}}{(-r)^{1-\beta }} \Vert \omega (r)\Vert _{X_{\beta }}~{\text {d}}r\\&\qquad + c\text {e}^{-(\lambda -cC_{F})t} \Vert u_{0}\Vert _X + \frac{c^2C_{F} \overline{C}_{F}}{\lambda (\lambda -cC_{F})} \\&\qquad + cC_{F}\sigma {\hat{c}} \text {e}^{-(\lambda -cC_{F})t}\int \limits _{-t}^{0} \text {e}^{cC_{F}s}\Vert \omega (s)\Vert _{X_{\beta }}~{\text {d}}s\\&\qquad + \frac{cC_{F}{\sigma }{\widetilde{C}}_{1-\beta }}{\lambda -cC_{F}} \int \limits _{-\infty }^{0}\frac{\text {e}^{\lambda s}}{(-s)^{1-\beta }}\Vert \omega (s)\Vert _{X_{\beta }}~{\text {d}}s\\&\le \text {e}^{-(\lambda -cC_{F})t} \Bigg (2c \Vert u_{0}\Vert _X + \sigma {\hat{c}} \Vert \omega (-t)\Vert _{X_{\beta }} \Bigg ) +\frac{c \overline{C}_F}{\lambda - cC_{F}}\\&\qquad + cC_{F}\sigma {\hat{c}} \int \limits _{-\infty }^{0} \text {e}^{cC_{F}s}\Vert \omega (s)\Vert _{X_{\beta }}~{\text {d}}s + \frac{\sigma {\widetilde{C}}_{1-\beta }\lambda }{\lambda -cC_{F}} \int \limits _{-\infty }^{0}\frac{\text {e}^{\lambda s}}{(-s)^{1-\beta }}\Vert \omega (s)\Vert _{X_{\beta }}~{\text {d}}s. \end{aligned} \end{aligned}$$
(3.13)
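For the reader’s convenience, we indicate how the constants combine in the last step; for the terms involving \(\overline{C}_F\), we have

$$\begin{aligned} \frac{c\overline{C}_{F}}{\lambda }+\frac{c^{2}C_{F}\overline{C}_{F}}{\lambda (\lambda -cC_{F})} =\frac{c\overline{C}_{F}}{\lambda }\cdot \frac{\lambda }{\lambda -cC_{F}}=\frac{c\overline{C}_{F}}{\lambda -cC_{F}}, \end{aligned}$$

and the two integrals weighted by \(\frac{\text {e}^{\lambda s}}{(-s)^{1-\beta }}\) combine in the same way, which produces the factor \(\frac{\lambda }{\lambda -cC_{F}}\).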

Using (2.2), we infer that the set \(B(\omega ):=B(0,\rho (\omega )+\delta )\), for some \(\delta >0\), where

$$\begin{aligned} \rho (\omega )&:= \frac{c \overline{C}_F}{\lambda - cC_{F}} + cC_{F}\sigma {\hat{c}} \int \limits _{-\infty }^{0} \text {e}^{cC_{F}s}\Vert \omega (s)\Vert _{X_{\beta }}~{\text {d}}s\\&\qquad + \frac{\sigma {\widetilde{C}}_{1-\beta }\lambda }{\lambda -cC_{F}} \int \limits _{-\infty }^{0} \frac{\text {e}^{\lambda s}}{(-s)^{1-\beta }}\Vert \omega (s)\Vert _{X_{\beta }}~{\text {d}}s \end{aligned}$$

is a pullback absorbing set for our random dynamical system. This expression is natural, since we can immediately see the influence of the linear part, the nonlinear term and the noise intensity. The previous integrals are well defined due to the subexponential growth of \(\omega \); recall that the set of all \(\omega \in \Omega \) with subexponential growth is invariant and has full measure. The temperedness of the absorbing set can be verified as in [5, Lem. 3.7]. \(\square \)

3.2 Existence and finite fractal dimension of random attractors

We now apply Theorem 2.12 to deduce the existence of nonautonomous random exponential attractors for the random dynamical system \(\varphi \).

Theorem 3.8

For every \(\nu \in (0,\frac{1}{2})\) and \(\eta \in (0,1)\), the random dynamical system \(\varphi \) generated by (3.4) has a nonautonomous random pullback exponential attractor \(\mathcal {M}^{\nu ,\eta }\), and its fractal dimension is bounded by

$$\begin{aligned} \sup _{t\in \mathbb {R}}\text {dim}_f(\mathcal {M}^{\nu ,\eta }(t,\omega ))\le \log _{\frac{1}{2\nu }}\left( N_{\frac{\nu }{\kappa }}^X(B^{X_\eta }(0,1))\right) , \end{aligned}$$

where

$$\begin{aligned} \kappa =\frac{{\widetilde{C}}_\eta }{{\tilde{t}}^{\eta }}+C_F{\widetilde{C}}_\eta c\text {e}^{\frac{cC_F}{\lambda }}\int _0^{{\tilde{t}}} \frac{\text {e}^{-\lambda ({\tilde{t}}-s)}}{({\tilde{t}}-s)^\eta }~{\text {d}}s, \end{aligned}$$

and \({\tilde{t}}>0\) is arbitrary.

We remark that \(\kappa \) is determined by the constant \({\widetilde{C}}_\eta \) in Lemma 2.15, the Lipschitz constant \(C_F\) of \(F\), the constants \(c\) and \(\lambda \) in (2.5), and the choice of \(\eta \) and \({\tilde{t}}\).

Proof

We verify the hypotheses in Theorem 2.12.

\((H_1)\):

By Assumption (AC), this property holds for the spaces \(X\) and \(Y=X_\eta \), for arbitrary \(\eta \in (0,1).\)

\((H_2)\):

This was shown in Lemma 3.7: \(B(\omega )=B(0,\rho (\omega )+\delta )\), for some \(\delta >0\), is pullback \(\mathcal {D}\)-absorbing and \(B\in \mathcal {D}\). Moreover, the absorbing time fulfills the condition in \((H_1)\): for \(D\in \mathcal {D}\), we have

$$\begin{aligned} T_{D,\omega }=\inf \left\{ {\tilde{t}}\ge 0 : \text {e}^{-(\lambda -cC_F)t}\Big (2c\sup _{\zeta \in D(\theta _{-t}\omega )} \Vert \zeta \Vert +\sigma {\hat{c}}\Vert \omega (-t)\Vert _{X_{\beta }}\Big )<\delta \quad \forall t\ge {\tilde{t}}\right\} , \end{aligned}$$

see the estimate in (3.13).
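We note that \(T_{D,\omega }\) is indeed finite: since \(\lambda >cC_F\), the sets in \(\mathcal {D}\) are tempered and \(\Vert \omega (-t)\Vert _{X_{\beta }}\) grows subexponentially, we have

$$\begin{aligned} \text {e}^{-(\lambda -cC_F)t}\Big (2c\sup _{\zeta \in D(\theta _{-t}\omega )} \Vert \zeta \Vert +\sigma {\hat{c}}\Vert \omega (-t)\Vert _{X_{\beta }}\Big )\longrightarrow 0\qquad \text{ as } t\rightarrow \infty . \end{aligned}$$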

\((H_4)\):

We verify the Lipschitz continuity of \(\varphi \) in B. To this end, let \(u_0,v_0\in B(\omega )\), \(\omega \in \Omega \). For the difference of the corresponding solutions, we obtain

$$\begin{aligned}&\Vert \varphi (t, \omega , u_0)-\varphi (t, \omega , v_0)\Vert _{X}\\&\quad \le \Vert {\widetilde{U}}(t,\omega )(u_0-v_0)\Vert _{X} +\int _0^{t}\Vert {\widetilde{U}}(t-s,\theta _{s}\omega ) \big (F(\varphi (s, \omega , u_0))-F(\varphi (s, \omega , v_0))\big )\Vert _{X}~{\text {d}}s\\&\quad \le c \text {e}^{-\lambda t}\Vert u_0-v_0\Vert _X+\int _0^{t}c\text {e}^{-\lambda (t-s)} \big \Vert F(\varphi (s, \omega , u_0))-F(\varphi (s, \omega , v_0))\big \Vert _{X}~{\text {d}}s\\&\quad \le c \text {e}^{-\lambda t}\Vert u_0-v_0\Vert _X+cC_F\int _0^{t} \text {e}^{-\lambda (t-s)} \big \Vert \varphi (s, \omega , u_0)-\varphi (s, \omega , v_0)\big \Vert _{X}~{\text {d}}s\\&\quad \le c \Vert u_0-v_0\Vert _X+cC_F\int _0^{t} \text {e}^{-\lambda (t-s)} \big \Vert \varphi (s, \omega , u_0)-\varphi (s, \omega , v_0)\big \Vert _{X}~{\text {d}}s. \end{aligned}$$

Hence, Gronwall’s lemma implies that

$$\begin{aligned} \Vert \varphi ( t, \omega , u_0)-\varphi (t, \omega , v_0)\Vert _{X}\le c \Vert u_0-v_0\Vert _X\text {e}^{cC_F\int _0^t\text {e}^{-\lambda (t-s)}~{\text {d}}s}\le c\text {e}^{\frac{cC_F}{\lambda }} \Vert u_0-v_0\Vert _X. \end{aligned}$$
\((H_3)\):

Finally, we use the Lipschitz continuity in \((H_4)\) to verify the smoothing property for the spaces X and \(Y=X_\eta \). Let \({\tilde{t}}>0\). We estimate the difference of the solutions in the \(X_\eta \)-norm,

$$\begin{aligned}&\Vert \varphi ({\tilde{t}}, \omega , u_0)-\varphi ({\tilde{t}}, \omega , v_0)\Vert _{X_\eta }\\&\quad \le \Vert {\widetilde{U}}({\tilde{t}},\omega )(u_0-v_0)\Vert _{X_\eta } +\int _0^{{\tilde{t}}}\Vert {\widetilde{U}}({\tilde{t}}-s,\theta _{s}\omega ) \big (F(\varphi (s, \omega , u_0))-F(\varphi (s, \omega , v_0))\big )\Vert _{X_\eta }~{\text {d}}s\\&\quad \le \frac{{\widetilde{C}}_\eta }{{\tilde{t}}^\eta } \text {e}^{-\lambda {\tilde{t}}}\Vert u_0-v_0\Vert _X+{\widetilde{C}}_\eta \int _0^{{\tilde{t}}}\frac{\text {e}^{-\lambda ({\tilde{t}}-s)}}{({\tilde{t}}-s)^\eta } \big \Vert F(\varphi (s, \omega , u_0))-F(\varphi (s, \omega , v_0))\big \Vert _{X}~{\text {d}}s\\&\quad \le \frac{{\widetilde{C}}_\eta }{{\tilde{t}}^\eta } \text {e}^{-\lambda {\tilde{t}}}\Vert u_0-v_0\Vert _X+C_F{\widetilde{C}}_\eta \int _0^{{\tilde{t}}} \frac{\text {e}^{-\lambda ({\tilde{t}}-s)}}{({\tilde{t}}-s)^\eta } \big \Vert \varphi (s, \omega , u_0)-\varphi (s, \omega , v_0)\big \Vert _{X}~{\text {d}}s\\&\quad \le \frac{{\widetilde{C}}_\eta }{{\tilde{t}}^\eta }\Vert u_0-v_0\Vert _X+C_F{\widetilde{C}}_\eta c\text {e}^{\frac{cC_F}{\lambda }}\int _0^{{\tilde{t}}} \frac{\text {e}^{-\lambda ({\tilde{t}}-s)}}{({\tilde{t}}-s)^\eta }~{\text {d}}s\, \Vert u_0-v_0\Vert _X, \end{aligned}$$

where we used the Lipschitz continuity of \(\varphi \) in X in the last step. Hence, the smoothing property holds with the smoothing constant

$$\begin{aligned} \kappa =\frac{{\widetilde{C}}_\eta }{\tilde{t}^\eta } +C_F{\widetilde{C}}_\eta c\text {e}^{\frac{cC_F}{\lambda }}\int _0^{{\tilde{t}}} \frac{\text {e}^{-\lambda s}}{s^\eta }~{\text {d}}s. \end{aligned}$$

The smoothing property holds for any \({\tilde{t}}>0\), and consequently, \((H_3)\) is satisfied.

\(\square \)
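We also remark that the integral appearing in the smoothing constant \(\kappa \) can be bounded independently of \({\tilde{t}}\) by a standard Gamma function estimate,

$$\begin{aligned} \int _0^{{\tilde{t}}} \frac{\text {e}^{-\lambda s}}{s^\eta }~{\text {d}}s\le \int _0^{\infty } \frac{\text {e}^{-\lambda s}}{s^\eta }~{\text {d}}s=\frac{\Gamma (1-\eta )}{\lambda ^{1-\eta }}, \end{aligned}$$

so that \(\kappa \le \frac{{\widetilde{C}}_\eta }{{\tilde{t}}^{\eta }}+C_F{\widetilde{C}}_\eta c\text {e}^{\frac{cC_F}{\lambda }}\frac{\Gamma (1-\eta )}{\lambda ^{1-\eta }}\) for every \({\tilde{t}}>0\).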

An immediate consequence is the existence and finite fractal dimension of the global random attractor.

Corollary 3.9

There exists a unique global random attractor for \(\varphi \), and its fractal dimension is bounded by

$$\begin{aligned} \text {dim}_f(\mathcal {A}(\omega ))\le \inf _{\nu \in (0,\frac{1}{2})}\left\{ \log _{\frac{1}{2\nu }}\left( N_{\frac{\nu }{\kappa }}^X(B^{X_\eta }(0,1))\right) \right\} , \end{aligned}$$

for all \(\eta \in (0,1)\), where \(\kappa \) is the constant given in Theorem 3.8.

The existence of a nonautonomous random exponential attractor implies that the global random attractor exists and that its fractal dimension is finite. We point out that, in our particular case, it is in fact easier to consider random exponential attractors than to deduce the existence of the global random attractor from Theorem 2.10: we have shown that a tempered absorbing set exists, but the theorem requires a compact absorbing set. To show how compactness can be established, and to indicate that the proof is indeed more involved than verifying the hypotheses of Theorem 2.12, we provide the following lemma, even though we do not use it to prove our main results.

Lemma 3.10

Let \(T_B\ge 0\) denote the absorbing time corresponding to the absorbing set B. Then, the set \(K(\omega ):=\overline{\varphi (T_B,\theta _{-T_{B}}\omega , B(\theta _{-T_B}\omega )) }^{X}\) is a compact absorbing set for \(\varphi \).

Proof

The proof is based on compact embeddings of fractional power spaces. Let \(\eta \) be such that \(0<\eta <\beta \le 1\). It suffices to derive uniform estimates for the solutions with respect to the \(X_{\eta }\)-norm, since \(X_{\eta }\) is compactly embedded into \(X\). Let \(u_0\in B(\theta _{-T_B}\omega )\). We observe that

$$\begin{aligned} \Vert \varphi (T_B,\theta _{-T_B}\omega ,u_{0})\Vert _{X_{\eta }}&\le \Vert {\widetilde{U}}(T_B,\theta _{-T_B}\omega )u_{0}\Vert _{X_{\eta }} + \sigma \Vert {\widetilde{U}}(T_B,\theta _{-T_B}\omega )\theta _{-T_B}\omega (T_B)\Vert _{X_{\eta }} \\&\quad + \int \limits _{0}^{T_B} \Vert {\widetilde{U}}(T_B-s,\theta _{s-T_B}\omega ) F(\varphi (s,\theta _{-T_B}\omega ,u_{0})) \Vert _{X_{\eta }}~{\text {d}}s \\&\quad +\sigma \int \limits _{0}^{T_B} \Vert {\widetilde{U}}(T_B-s,\theta _{s-T_B}\omega ) A(\theta _{s-T_B}\omega ) \omega (s-T_B) \Vert _{X_{\eta }}~{\text {d}}s. \end{aligned}$$

To estimate these terms, we use that \(u_{0}\in B(\theta _{-T_B}\omega )\) and that B is a pullback absorbing set. The first and second terms yield the following expressions,

$$\begin{aligned} \Vert {\widetilde{U}}(T_B,\theta _{-T_B}\omega )u_{0}\Vert _{X_{\eta }} \le \frac{{\widetilde{C}}_\eta }{T_B^{\eta }} \text {e}^{-\lambda T_B} \Vert u_{0}\Vert _X\le \frac{{\widetilde{C}}_\eta }{T_B^{\eta }} \text {e}^{-\lambda T_B} (\rho (\theta _{-T_B}\omega )+\delta ), \end{aligned}$$

and

$$\begin{aligned} \Vert {\widetilde{U}}(T_B,\theta _{-T_B}\omega )\omega (-T_B)\Vert _{X_{\eta }}\le {\widetilde{C}}_\eta \frac{{\hat{c}}}{c}\text {e}^{-\lambda T_B}\Vert \omega (-T_B)\Vert _{X_{\beta }}. \end{aligned}$$

For the generalized convolution, we obtain

$$\begin{aligned}&\int \limits _{0}^{T_B} \Vert {\widetilde{U}}(T_B-s,\theta _{s-T_B}\omega ) A(\theta _{s-T_B}\omega ) \omega (s-T_B) \Vert _{X_{\eta }}~{\text {d}}s \\&\quad =\int \limits _{0}^{T_B} \Vert (-A(\omega ))^{\eta }{\widetilde{U}}(T_B-s,\theta _{s-T_B}\omega )(-A(\theta _{s-T_B}\omega ))^{1-\beta }(-A(\theta _{s-T_B}\omega ))^{\beta }\omega (s-T_B) \Vert _{X}~{\text {d}}s\\&\quad \le {\widetilde{C}}_{1-(\beta -\eta )}\int \limits _{0}^{T_B} \frac{\text {e}^{-\lambda (T_B-s)}}{(T_B-s)^{1-(\beta -\eta )}} \Vert \omega (s-T_B)\Vert _{X_{\beta }} ~{\text {d}}s\\&\quad \le {\widetilde{C}}_{1-(\beta -\eta )} \sup \limits _{s\in [0,T_B]} \Vert \omega (s-T_B)\Vert _{X_{\beta }}\int \limits _{0}^{T_B} \frac{\text {e}^{-\lambda (T_B-s)}}{(T_B-s)^{1-(\beta -\eta )}}~{\text {d}}s<\infty . \end{aligned}$$

Finally, we estimate the drift term,

$$\begin{aligned}&\int \limits _{0}^{T_B} \Vert {\widetilde{U}}(T_B-s,\theta _{s-T_B}\omega ) F(\varphi (s,\theta _{-T_B}\omega ,u_{0})) \Vert _{X_{\eta }}~{\text {d}}s\\&\quad \le {\widetilde{C}}_\eta \overline{C}_{F}\int \limits _{-T_B}^{0}\frac{\text {e}^{\lambda r}}{(-r)^{\eta }}~{\text {d}}r + {\widetilde{C}}_\eta C_{F} \int \limits _{-T_B}^{0} \frac{\text {e}^{\lambda r}}{(-r)^{\eta }} \Vert \varphi (r+T_B,\theta _{-T_B}\omega ,u_{0})\Vert _X~{\text {d}}r\\&\quad \le {\widetilde{C}}_\eta \overline{C}_{F}\int \limits _{-T_B}^{0} \frac{\text {e}^{\lambda r}}{(-r)^{\eta }}~{\text {d}}r + {\widetilde{C}}_\eta C_{F} \int \limits _{-T_B}^{0} \frac{\text {e}^{\lambda r}}{(-r)^{\eta }}(\rho (\theta _r\omega )+\delta )~{\text {d}}r, \end{aligned}$$

where we used that \(u_0\in B(\theta _{-T_B}\omega )\) and the absorbing property of B, i.e.,

$$\begin{aligned} \varphi (r+T_B,\theta _{-T_B}\omega ,u_{0})\in B(\theta _r\omega ). \end{aligned}$$

We remark that the expressions and \(\omega \)-dependent constants in all estimates are well defined. Collecting the estimates, we finally conclude that

$$\begin{aligned} \Vert \varphi (T_B,\theta _{-T_B}\omega , u_{0})\Vert _{X_{\eta }} \le {\widetilde{C}}(\omega , \eta ,\beta ,\delta , T_B)<\infty , \end{aligned}$$

for some constant \({\widetilde{C}}(\omega , \eta ,\beta ,\delta , T_B)\). The compact embedding \(X_{\eta }\hookrightarrow X\) implies that the set \(\varphi (T_B,\theta _{-T_B}\omega , B(\theta _{-T_{B}}\omega ))\) is relatively compact in \(X\). Hence, \(K(\omega )\) is compact, which proves the statement. \(\square \)

4 Examples

We provide examples of differential operators \(A\) that satisfy the assumptions of the previous sections. The canonical examples are uniformly elliptic operators with random, time-dependent coefficients. Such operators have been investigated in the context of SPDEs, see [25, 26] and the references therein. In the framework of random dynamical systems, several properties of the evolution systems generated by such operators have been analyzed, including results on spectral theory and principal Lyapunov exponents [21, 22, 29], as well as stable and unstable manifolds and multiplicative ergodic theorems [6, 20].

We consider random partial differential operators in the Banach space \(X:=L^{p}(G)\) for \(2\le p<\infty \), where \(G\subset \mathbb {R}^{n}\) is a bounded open domain with smooth boundary \(\partial G\) (see also [24, Sec. 7.6]). We recall that, in our case, the differential operator \(A(t,\omega )\) depends on the time \(t\in \mathbb {R}\) and the random parameter \(\omega \in \Omega \) in the following way:

$$\begin{aligned} A(t,\omega )=A(\theta _t\omega ), \qquad t\in \mathbb {R},\omega \in \Omega . \end{aligned}$$

Example 4.1

Let \(m\in \mathbb {N}\) and A be the random partial differential operator

$$\begin{aligned} A(\theta _t\omega ,x,{\text {D}})=\sum \limits _{|k|\le 2m}a_{k}(\theta _t\omega ,x){\text {D}}^{k},\qquad t\in \mathbb {R},~ \omega \in \Omega ,~ x\in G, \end{aligned}$$

with homogeneous Dirichlet boundary conditions,

$$\begin{aligned} {\text {D}}^{k}u=0\quad \text {on }\partial G \qquad \text { for } |k|<m. \end{aligned}$$

We make the following assumptions on the coefficients; a concrete second-order instance is sketched after the list.

  1. 1.

    The operator A is uniformly strongly elliptic in G, i.e., there exists a constant \(\overline{c}>0\) such that

    $$\begin{aligned} (-1)^m \sum \limits _{|k|=2m}a_{k}(\theta _t\omega ,x)\xi ^{k}\ge \overline{c}|\xi |^{2m}\quad \text{ for } \text{ all } t\in \mathbb {R},~\omega \in \Omega ,~ x\in \overline{G},~ \xi \in \mathbb {R}^{n}. \end{aligned}$$
  2. 2.

    The coefficients form a stochastic process \((t,\omega )\mapsto a_{k}(\theta _{t}\omega ,\cdot {})\in C^{2m}(\overline{G})\) which has Hölder continuous trajectories, i.e., there exists \(\nu \in (0,1]\) such that

    $$\begin{aligned} |a_{k}(\theta _{t}\omega ,x)-a_{k}(\theta _{s}\omega ,x)|\le \overline{c}_{1}|t-s|^{\nu }\quad \text{ for } \text{ all } t, s\in \mathbb {R},~ x\in \overline{G},~ |k|\le 2m, \end{aligned}$$
    (4.1)

    for some \(\overline{c}_{1}>0\).

  3. 3.

    The constants \(\overline{c}\) and \(\overline{c}_{1}\) can be chosen uniformly with respect to \(\omega \in \Omega \) and \(x\in G\).
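To fix ideas, we sketch the simplest case \(m=1\) (a purely illustrative instance; the coefficients below are hypothetical and only required to satisfy the conditions above). The operator takes the form

$$\begin{aligned} A(\theta _t\omega ,x,{\text {D}})u=\sum \limits _{i,j=1}^{n}a_{ij}(\theta _t\omega ,x)\frac{\partial ^{2}u}{\partial x_{i}\partial x_{j}}+\sum \limits _{i=1}^{n}b_{i}(\theta _t\omega ,x)\frac{\partial u}{\partial x_{i}}+c_{0}(\theta _t\omega ,x)u, \end{aligned}$$

with homogeneous Dirichlet boundary conditions; condition 1 then reduces to \(-\sum _{i,j=1}^{n}a_{ij}(\theta _t\omega ,x)\xi _{i}\xi _{j}\ge \overline{c}|\xi |^{2}\) for all \(\xi \in \mathbb {R}^{n}\), while conditions 2 and 3 require the Hölder continuity and uniformity of \(a_{ij}\), \(b_{i}\) and \(c_{0}\).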

We define the \(L^{p}\)-realization of \(A(\cdot {},\cdot , D)\) by

$$\begin{aligned}&A_{p}(\theta _t\omega )u:=A(\theta _t\omega ,x,{\text {D}})u\qquad \text{ for } u\in \mathcal {D}_A, \text{ where } \\&\mathcal {D}_A:=D(A_{p}(\theta _t\omega ))=W^{2m,p}(G)\cap W^{m,p}_{0}(G). \end{aligned}$$
(4.2)

We now verify Assumptions (A0)–(A3) and (AC). It is well known that \(A_{p}(\omega )\) generates a compact analytic semigroup in \(L^p(G)\) for every \(\omega \in \Omega \). Moreover, the mapping \(\omega \mapsto A_p(\omega )v\) is measurable for every smooth function \(v\in C^{\infty }(G)\), which entails the measurability of the mapping \(\omega \mapsto A_p(\omega )v\) for every \(v\in \mathcal {D}_A\). Consequently, Assumptions (A0)–(A2) and (AC) are satisfied. It only remains to show the Hölder continuity of the mapping \(t\mapsto A_{p}(\theta _{t}\omega )\) in order to verify (A3).

To this end, let \(t,s\in \mathbb {R}\). Then, we have

$$\begin{aligned}&\Vert A_{p}(\theta _{t}\omega )-A_{p}(\theta _{s}\omega )\Vert ^{p}_{\mathcal {L}(\mathcal {D}_{A},X)}\\&\qquad =\sup \limits _{v\in \mathcal {D}_A,\Vert v\Vert =1}\Vert (A_{p}(\theta _{t}\omega )-A_{p}(\theta _{s}\omega ))v\Vert _{X}^{p}\\&\qquad = \sup \limits _{v\in \mathcal {D}_A,\Vert v\Vert =1} \Big \Vert \sum \limits _{|k|\le 2m}(a_{k}(\theta _{t}\omega ,x)-a_{k}(\theta _{s}\omega ,x)){\text {D}}^{k}v\Big \Vert _{X}^{p}. \end{aligned}$$

Furthermore, we estimate

$$\begin{aligned}&\Big \Vert \sum \limits _{|k|\le 2m}(a_{k}(\theta _{t}\omega ,x)-a_{k}(\theta _{s}\omega ,x)){\text {D}}^{k}v\Big \Vert _{X}^{p}\\&\quad =\ \int \limits _{G}\Bigg |\sum \limits _{|k|\le 2m} (a_{k}(\theta _{t}\omega ,x) -a_{k}(\theta _{s}\omega ,x)){\text {D}}^{k}v(x)\Bigg |^{p}~{\text {d}}x\\&\quad \le \ \int \limits _{G} C_{p} \sum \limits _{|k|\le 2m} |(a_{k}(\theta _{t}\omega ,x)-a_{k}(\theta _{s}\omega ,x)){\text {D}}^{k} v(x) |^{p}~{\text {d}}x\\&\quad \le \ C_{p}\sum \limits _{|k|\le 2m}\sup \limits _{x\in G} |a_{k}(\theta _{t}\omega ,x)-a_{k}(\theta _{s}\omega ,x)|^{p} \int \limits _{G}|{\text {D}}^{k}v(x)|^{p}~{\text {d}}x\\&\quad \le \ C_{p}\sum \limits _{|k|\le 2m}\sup \limits _{x\in G} |a_{k}(\theta _{t}\omega ,x)-a_{k}(\theta _{s}\omega ,x)|^{p} \Vert v\Vert ^{p}_{W^{2m,p}}\\&\quad \le \ C_{p} \Vert v\Vert ^{p}_{W^{2m,p}}\sum \limits _{|k|\le 2m}\Vert a_{k}(\theta _{t}\omega , \cdot )-a_{k}(\theta _{s}\omega , \cdot )\Vert ^{p}_{C^{2m}(\overline{G})}. \end{aligned}$$

Therefore, the Hölder continuity of \((t,\omega )\mapsto a_{k}(\theta _{t}\omega ,\cdot {})\) justifies (2.4).
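More explicitly, taking the supremum over \(v\in \mathcal {D}_A\) with \(\Vert v\Vert =1\) and then the \(p\)-th root, the above computation together with (4.1) yields

$$\begin{aligned} \Vert A_{p}(\theta _{t}\omega )-A_{p}(\theta _{s}\omega )\Vert _{\mathcal {L}(\mathcal {D}_{A},X)}\le C|t-s|^{\nu }\qquad \text{ for } \text{ all } t,s\in \mathbb {R}, \end{aligned}$$

with a constant \(C>0\) depending only on \(p\), \(m\), \(n\), \(\overline{c}_{1}\) and \(G\) (a routine bookkeeping of the constants above).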

The following example is similar to [26, Example 6.2] and [32, Section 10.2]. In our case, the operators satisfy the structural assumption (3.1) and the domains are assumed to be constant with respect to time t and \(\omega \in \Omega \). For random nonautonomous second-order operators of this type, see [21, Sec. 3] and [22].

Example 4.2

Let \(m\in \mathbb {N}\) and A be the differential operator

$$\begin{aligned} A(\theta _t\omega ,x,{\text {D}}):=\sum \limits _{|k_1|,|k_2|\le m}{\text {D}}^{k_1}(a_{k_1,k_2}(\theta _t\omega ,x){\text {D}}^{k_2}),\qquad \omega \in \Omega ,~ x\in G,~t\in \mathbb {R}, \end{aligned}$$

with homogeneous Dirichlet or Neumann boundary conditions. Similarly as in the previous example and [26], we make the following assumptions on the coefficients.

  1. 1.

    We assume that the coefficients \(a_{k_1,k_2}\) are bounded and symmetric. More precisely, there exists a constant \(K\ge 1\) such that

    $$\begin{aligned} |a_{k_1,k_2}(\theta _t\omega ,x)|\le K\qquad \text {for all}\ |k_1|,|k_2|\le m,\ t\in \mathbb {R}, x\in G, \omega \in \Omega \end{aligned}$$

    and

    $$\begin{aligned} a_{k_1,k_2}(\cdot ,\cdot )=a_{k_2,k_1}(\cdot ,\cdot )~~ \text{ for } ~|k_1|,|k_2|\le m. \end{aligned}$$

    Furthermore, the mapping \(t\mapsto {\text {D}}^{k} a_{k_1,k_2}(\theta _t\omega ,x)\) is continuous for \(|k|,|k_1|,|k_2|\le m\), \(\omega \in \Omega \) and \(x\in G\).

  2. 2.

    The operator A is uniformly elliptic in G, i.e., there exists a constant \(\overline{c}>0\) such that

    $$\begin{aligned} \sum \limits _{|k_1|=|k_2|=m}a_{k_1,k_2}(\theta _t\omega ,x)\xi ^{k_1}\xi ^{k_2}\ge \overline{c}|\xi |^{2m}\qquad \text{ for } \text{ all } t\in \mathbb {R},~\omega \in \Omega ,~x\in \overline{G},~ \xi \in \mathbb {R}^{n}. \end{aligned}$$
  3. 3.

    The coefficients form a stochastic process \((t,\omega )\mapsto a_{k_1,k_2}(\theta _{t}\omega ,\cdot {})\in C^{m}(\overline{G})\) with Hölder continuous trajectories as in (4.1). This means that there exists \(\nu \in (0,1]\) such that

    $$\begin{aligned} |a_{k_1,k_2}(\theta _{t}\omega ,x)-a_{k_1,k_2}(\theta _{s}\omega ,x)|\le \overline{c}_{2}|t-s|^{\nu }\quad \text{ for } \text{ all } t, s\in \mathbb {R},~ x\in \overline{G},~ |k_1|, |k_2|\le m, \end{aligned}$$

    for some constant \(\overline{c}_{2}>0\).

  4. 4.

    The constants \(K\), \(\overline{c}\) and \(\overline{c}_{2}\) can be chosen uniformly with respect to \(\omega \in \Omega \) and \(x\in G\).

One can define the \(L^{p}\)-realization \(A_p\) of \(A(\cdot ,\cdot ,{\text {D}})\) as in (4.2). Moreover, one can verify as in Example 4.1 that the Assumptions (A0)–(A3) are satisfied.

For instance, (A3) follows from the estimate below: for \(v\in \mathcal {D}_A\) with \(\Vert v\Vert =1\), the triangle inequality and the Leibniz rule yield

$$\begin{aligned}&\Vert (A_{p}(\theta _{t}\omega )-A_{p}(\theta _{s}\omega ))v\Vert _{X}^{p} =\Bigg \Vert \sum \limits _{|k_1|,|k_{2}|\le m} {\text {D}}^{k_1}\big ((a_{k_1,k_2}(\theta _{t}\omega ,\cdot )-a_{k_1,k_2}(\theta _{s}\omega ,\cdot )){\text {D}}^{k_2}v\big )\Bigg \Vert ^{p}_{X}\\&\quad \le C_{p}\sum \limits _{|k_1|,|k_2|\le m} \big \Vert (a_{k_1,k_2}(\theta _{t}\omega ,\cdot )-a_{k_1,k_2}(\theta _{s}\omega ,\cdot )){\text {D}}^{k_2}v\big \Vert ^{p}_{W^{|k_1|,p}(G)}\\&\quad \le C_{p,m} \Vert v\Vert ^{p}_{W^{2m,p}}\sum \limits _{|k_1|,|k_2|\le m} \Vert a_{k_1,k_2}(\theta _{t}\omega ,\cdot )-a_{k_1,k_2}(\theta _{s}\omega ,\cdot )\Vert ^{p}_{C^{m}(\overline{G})}. \end{aligned}$$

Taking the supremum over all such \(v\) and using the Hölder continuity of the coefficients, we obtain (A3).

Example 4.3

Another widely studied example is given by operators of the form \(A:=\Delta + a(\theta _t\omega )\), where \(\Delta \) denotes the Laplace operator with homogeneous Dirichlet boundary conditions in a bounded domain \(G\subset \mathbb {R}^n\), and \(a(\theta _t\omega )\) can be viewed as a time-dependent random potential [27]. Here, the function \(a:\Omega \rightarrow (0,\infty )\) is measurable and the mapping \((t,\omega )\mapsto a(\theta _t\omega )\) is assumed to be Hölder continuous. For instance, in mathematical models for population dynamics, such random potentials can be used to quantify environmental fluctuations, see, e.g., [18] and the references therein. Several PDEs whose linear part has this structure have been investigated, see, e.g., [27], where a random nonautonomous version of the Fisher–KPP equation is considered and the asymptotic dynamics of the solutions as \(t\) tends to infinity is characterized in terms of the behavior of \(a\).
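For illustration, a Fisher–KPP-type model with this linear part formally reads \(\partial _t u=\Delta u+a(\theta _t\omega )u(1-u)\); a stochastic version fitting the form (1.1) would be, e.g.,

$$\begin{aligned} {\text {d}}u(t)=\big ((\Delta +a(\theta _t\omega ))u(t)+F(u(t))\big ){\text {d}}t+\sigma {\text {d}}W(t), \end{aligned}$$

where \(F\) is a globally Lipschitz continuous (e.g., suitably truncated) version of the logistic nonlinearity, so that our assumptions on the drift are satisfied. We emphasize that this is only a sketch; the precise random nonautonomous setting and the corresponding results can be found in [27].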