1 Introduction

The classical Yamada–Watanabe theorem [23] for finite dimensional Brownian stochastic differential equations (SDEs) states that weak existence and strong (i.e. pathwise) uniqueness imply strong existence and weak uniqueness (i.e. uniqueness in law). Jacod [9] lifted this result to SDEs driven by semimartingales and extended it by showing that weak existence together with strong uniqueness is equivalent to strong existence together with weak joint uniqueness, i.e. uniqueness of the joint law of the solution process and its random driver.

In view of Jacod’s theorem, it is an interesting and natural question whether the converse direction in the classical Yamada–Watanabe theorem holds, i.e. whether strong existence and weak uniqueness imply weak existence and strong uniqueness. This implication is nowadays often called the dual Yamada–Watanabe theorem. For finite dimensional Brownian SDEs, A.S. Cherny [1, 2] answered this question affirmatively by proving that weak uniqueness is equivalent to weak joint uniqueness.

More recently, Cherny’s result and the dual Yamada–Watanabe theorem have been generalized to several infinite dimensional frameworks. In [15, 21] the theorems were established for mild solutions to semilinear stochastic partial differential equations (SPDEs) and in [17, 18] for the variational framework.

In this short article we prove Cherny’s result for analytically weak solutions to the Banach space valued semilinear SPDE

$$\begin{aligned} d X_t = (AX_t + \mu _t (X))dt + \sigma _t(X) d W_t, \quad X_0 = x_0, \end{aligned}$$
(1.1)

where A is a densely defined operator and \(\mu \) and \(\sigma \) are progressively measurable processes on the path space of continuous functions. Furthermore, we deduce the dual theorem for our framework.

To the best of our knowledge, these results are new and extend previous ones in several directions. For instance, we study Banach space valued equations, whereas [21] considers only Hilbert space valued equations, and we allow non-anticipating coefficients, which are not covered in [15]. In particular, as we work with analytically weak solutions instead of mild solutions, we require no geometric assumptions on the underlying Banach space and only minimal assumptions on the linear operator A.

The basic strategy of our proof, which is borrowed from the finite dimensional case and also used in [15, 17, 18], is to construct an infinite dimensional Brownian motion V, independent of X, such that the noise W can be recovered from the solution process X and V. The technical challenge in this argument is the proof of the independence of X and V. Cherny’s proof of this used additional randomness, an enlargement of filtration and a conditioning argument. In [15, 17, 18] these ideas have been adapted to the respective infinite dimensional frameworks. Our approach is different and appears to us more straightforward and less technical. Namely, we transfer ideas from [3] for one dimensional SDEs with jumps to our continuous infinite dimensional setting and establish the independence with arguments based on cylindrical martingale problems. More precisely, we provide martingale characterizations for weak solutions to SPDEs and for infinite dimensional Brownian motion, then show that the quadratic covariations of the corresponding test martingales vanish and finally deduce the desired independence with the help of changes of measure. In comparison with Cherny’s method, we work directly with X and V without introducing additional randomness. Furthermore, once the martingale characterizations are established, the arguments are quite elementary.

The paper is structured as follows: In Sect. 2 we introduce our setting and state our main results: Theorem 2.3 and Corollary 2.5. At the end of Sect. 2 we briefly comment on possible applications of our results. The proof of Theorem 2.3 is given in Sect. 3. To keep the article as self-contained as possible, we added Appendix A, where we collect some technical facts needed in our proofs.

Let us end the introduction with a short comment on notation and terminology: We mainly follow the standard references [4, 13]. A detailed construction and standard properties of the stochastic integral can also be found in [15].

2 The Setting and Main Results

Let \({U}\) be a real (separable) Banach space with separable topological dual \({U}^*\) and let H be a real separable Hilbert space. We denote the corresponding norms of U and H by \(\Vert \cdot \Vert _{U}\) and \(\Vert \cdot \Vert _H\) and the scalar product of H by \(\langle \cdot , \cdot \rangle _H\). As usual, the topological dual of H is identified with H via the Riesz representation. Moreover, we write

$$\begin{aligned} \langle y, y^* \rangle _{U}\triangleq y^* (y), \quad (y, y^*) \in {U}\times U^*. \end{aligned}$$

The space of bounded linear operators \(H \rightarrow {U}\) is denoted by \(L \triangleq L (H, {U})\) and the corresponding operator norm is denoted by \(\Vert \cdot \Vert _L\). We define \(\mathbb {C}\triangleq C(\mathbb {R}_+, {U})\) to be the space of continuous functions \(\mathbb {R}_+ \rightarrow {U}\). Let \(\mathsf {X}= (\mathsf {X}_t)_{t \ge 0}\) be the coordinate process on \(\mathbb {C}\), i.e. \(\mathsf {X}(\omega ) = \omega \) for \(\omega \in \mathbb {C}\), and set \(\mathcal {C} \triangleq \sigma (\mathsf {X}_t, t \in \mathbb {R}_+)\) and \(\mathbf {C}\triangleq (\mathcal {C}_t)_{t \ge 0}\), where \(\mathcal {C}_t \triangleq \bigcap _{s > t} \sigma (\mathsf {X}_u, u \in [0, s])\) for \(t \in \mathbb {R}_+\).

Let us briefly comment on the driving noise of the SPDEs under consideration and on stochastic integration. We call a family \(W \triangleq (\beta ^k)_{k \in \mathbb {N}}\) of independent one dimensional standard Brownian motions a standard \(\mathbb {R}^\infty \)-Brownian motion. It is well-known (see, e.g. [13, Chapter 2]) that any standard \(\mathbb {R}^\infty \)-Brownian motion can be seen as a trace class Brownian motion in another Hilbert space: Let J be a one-to-one Hilbert–Schmidt embedding of H into another separable Hilbert space \((\overline{H}, \Vert \cdot \Vert _{\overline{H}}, \langle \cdot , \cdot \rangle _{\overline{H}})\) and let \((e_k)_{k \in \mathbb {N}}\) be an orthonormal basis of H. The formula

$$\begin{aligned} \overline{W} \triangleq \sum _{k = 1}^\infty \beta ^k J e_k \end{aligned}$$

defines a trace class \(\overline{H}\)-valued Brownian motion with covariance \(J J^*\). Conversely, any trace class \(\overline{H}\)-valued Brownian motion with covariance \(J J^*\) has such a series representation. Let \(\sigma = (\sigma _t)_{t \ge 0}\) be an H-valued progressively measurable process such that a.s.

$$\begin{aligned} \int _0^t \Vert \sigma _s\Vert ^2_H ds < \infty , \quad t \in \mathbb {R}_+. \end{aligned}$$
(2.1)

Then, \(\widetilde{\sigma } \triangleq \langle \sigma , \cdot \rangle _H\) defines a progressively measurable process with values in \(L_2 (H, \mathbb {R})\), the space of Hilbert–Schmidt operators \(H \rightarrow \mathbb {R}\), and \(\Vert \widetilde{\sigma }\Vert _{L_2(H, \mathbb {R})} = \Vert \sigma \Vert _H\). The stochastic integral of \(\widetilde{\sigma }\) w.r.t. a standard \(\mathbb {R}^\infty \)-Brownian motion W is defined by

$$\begin{aligned} \int _0^\cdot \langle \sigma _s, d W_s\rangle _H \equiv \int _0^\cdot \widetilde{\sigma }_s d W_s \triangleq \int _0^\cdot \widetilde{\sigma }_s J^{-1} d \overline{W}_s \equiv \int _0^\cdot \langle \sigma _s, J^{-1} d \overline{W}_s \rangle _H, \end{aligned}$$

where the stochastic integrals on the r.h.s. are defined in the classical manner (see, e.g. [13, Chapter 2]). We stress that this definition of the stochastic integral is independent of the choice of \(\overline{H}\) and J. It is also well-known (see, e.g. [4, Section 4.1.2]) that a standard \(\mathbb {R}^\infty \)-Brownian motion can be seen as a cylindrical Brownian motion \(\{B (x) :x \in H\}\) defined by the formula

$$\begin{aligned} B (x) \triangleq \sum _{k = 1}^\infty \langle x, e_k\rangle _H \beta ^k, \quad x \in H. \end{aligned}$$

Conversely, any cylindrical Brownian motion has such a series representation. For a simple H-valued process \(\sigma = \sum _{k = 1}^m f^k x^k\), where \(f^k\) are bounded real-valued progressively measurable processes and \(x^k \in H\), the stochastic integral of \(\sigma \) w.r.t. B can be defined by

$$\begin{aligned} \int _0^\cdot \langle \sigma _s, d B_s \rangle _H \triangleq \sum _{k = 1}^m \int _0^\cdot f^k_s d B_s (x^k), \end{aligned}$$

where the stochastic integrals on the r.h.s. are classical stochastic integrals w.r.t. one dimensional continuous local martingales. This definition extends to more general integrands by approximation, see [14] or [15]. In particular, for any H-valued progressively measurable process \(\sigma = (\sigma _t)_{t \ge 0}\) satisfying (2.1) it holds that

$$\begin{aligned} \int _0^\cdot \langle \sigma _s, d W_s\rangle _H = \int _0^\cdot \langle \sigma _s, J^{-1} d \overline{W}_s\rangle _H = \int _0^\cdot \langle \sigma _s, d B_s\rangle _H. \end{aligned}$$
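For orientation, the construction behind these identities also yields the usual Itô isometry (a standard fact, recorded here for convenience and valid whenever the right-hand side is finite):

```latex
\mathbb{E}\Big[ \Big( \int_0^t \langle \sigma_s, d W_s \rangle_H \Big)^2 \Big]
  = \mathbb{E}\Big[ \int_0^t \Vert \sigma_s \Vert_H^2 \, ds \Big],
  \quad t \in \mathbb{R}_+,
```

which explains in which sense (2.1) is the natural integrability condition on the integrand \(\sigma \).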

In the following we fix \(\overline{H}\) and J and identify the law of W with the law of \(\overline{W}\) seen as a probability measure on the canonical space of continuous functions \(\mathbb {R}_+ \rightarrow \overline{H}\) equipped with the \(\sigma \)-field generated by the corresponding coordinate process (which is its Borel \(\sigma \)-field when endowed with the local uniform topology).
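As a sanity check on this identification, the covariance of \(\overline{W}\) can be computed directly from the series representation (a routine verification using the independence of the \(\beta^k\) and Parseval's identity):

```latex
\mathbb{E}\big[ \langle \overline{W}_t, h \rangle_{\overline{H}} \,
                \langle \overline{W}_s, g \rangle_{\overline{H}} \big]
  = \sum_{k = 1}^{\infty} \mathbb{E}\big[ \beta^k_t \beta^k_s \big] \,
      \langle e_k, J^* h \rangle_H \langle e_k, J^* g \rangle_H
  = (t \wedge s) \, \langle J^* h, J^* g \rangle_H
  = (t \wedge s) \, \langle J J^* h, g \rangle_{\overline{H}}
```

for \(h, g \in \overline{H}\); as \(J\) is Hilbert–Schmidt, \(J J^*\) is indeed trace class.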

The input data for the SPDE (1.1) consist of the following:

  • Two processes defined on the filtered space \((\mathbb {C}, \mathcal {C}, \mathbf {C})\): An \({U}\)-valued progressively measurable process \(\mu =(\mu _t)_{t \ge 0}\) and an L-valued progressively measurable process \(\sigma = (\sigma _t)_{t \ge 0}\), i.e. \(\sigma h\) is progressively measurable for every \(h \in H\).

  • A set \(\mathcal {I} \in \mathcal {C}\) such that for all \(\omega \in \mathcal {I}\) the following holds:

    $$\begin{aligned} \int _0^t \Vert \mu _s (\omega )\Vert _{U}ds + \int _0^t \Vert \sigma _s (\omega )\Vert _{L}^2 ds < \infty , \quad t \in \mathbb {R}_+. \end{aligned}$$
  • A densely defined operator \(A :D(A) \subseteq {U}\rightarrow {U}\) with adjoint \(A^* :D(A^*) \subseteq {U}^* \rightarrow {U}^*\) whose domain \(D(A^*)\) is sequentially weak\(^*\) dense in \({U}^*\).

  • An initial value \(x_0 \in {U}\).

Remark 2.1

Often enough \({U}\) is itself a Hilbert space, or at least a reflexive Banach space, and A is the generator of a \(C_0\)-semigroup on \({U}\). In these cases A and \(A^*\) are densely defined and in particular \(D(A^*)\) is sequentially weak\(^*\) dense.
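As a concrete illustration (a standard textbook example, not taken from the text): the stochastic heat equation on \((0, 1)\) with Dirichlet boundary conditions fits this setting with

```latex
U = L^2(0, 1), \qquad A = \partial_x^2, \qquad
D(A) = H^2(0, 1) \cap H^1_0(0, 1).
```

Here A is self-adjoint and generates the heat semigroup on U; identifying \(U^* \cong U\) via the Riesz representation, we have \(A^* = A\), so \(D(A^*)\) is dense.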

In the following definition we introduce analytically and probabilistically weak solutions to the SPDE (1.1) and two weak uniqueness concepts.

Definition 2.2

  1. (i)

    We call \((\mathbb {B}, W)\) a driving system, if \(\mathbb {B}= (\Omega , \mathcal {F}, (\mathcal {F}_t)_{t \ge 0}, \mathbb {P})\) is a filtered probability space with right-continuous and complete filtration which supports a standard \(\mathbb {R}^\infty \)-Brownian motion W.

  2. (ii)

    We call \((\mathbb {B}, W, X)\) a weak solution to the SPDE (1.1), if \((\mathbb {B}, W)\) is a driving system and X is a continuous \({U}\)-valued adapted process on \(\mathbb {B}\) such that a.s. \(X \in \mathcal {I}\) and for all \(y^* \in D(A^*)\) a.s.

    $$\begin{aligned} \begin{aligned} \langle X, y^*\rangle _{U}= \langle x_0&, y^*\rangle _{U}+ \int _0^\cdot \langle X_s, A^*y^*\rangle _{U}ds \\&+ \int _0^\cdot \langle \mu _s(X), y^*\rangle _{U}ds + \int _0^\cdot \langle \sigma _s (X)^* y^*, d W_s \rangle _{H}. \end{aligned} \end{aligned}$$
    (2.2)

    The process X is called a solution process on the driving system \((\mathbb {B}, W)\).

  3. (iii)

    We say that weak (joint) uniqueness holds for the SPDE (1.1), if for any two weak solutions \((\mathbb {B}^1, W^1, X^1)\) and \((\mathbb {B}^2, W^2, X^2)\) the laws of \(X^1\) and \(X^2\) (the laws of \((X^1, W^1)\) and \((X^2, W^2)\)) coincide. The law of a solution process is called a solution measure.

Our main result is the following:

Theorem 2.3

Weak uniqueness holds if and only if weak joint uniqueness holds.

The proof of this theorem is given in Sect. 3. We also provide a dual Yamada–Watanabe theorem for our framework. To formulate it we need more terminology.

Definition 2.4

  1. (i)

    We say that strong existence holds for the SPDE (1.1), if there exists a weak solution \((\mathbb {B}, W, X)\) such that X is adapted to the completion of the natural filtration of \(\overline{W}\).

  2. (ii)

    We say that strong uniqueness holds for the SPDE (1.1), if any two solution processes on the same driving system are indistinguishable.

The classical Yamada–Watanabe theorem for the Markovian version of our framework is given by [12, Theorem 5.3].

Corollary 2.5

(Dual Yamada–Watanabe Theorem) Weak uniqueness and strong existence imply strong uniqueness and weak existence.

Proof

Due to Theorem 2.3, it suffices to show that weak joint uniqueness and strong existence imply strong uniqueness. To prove this, we follow the proof of [9, Theorem 8.3]. Let \(\mathsf {P}\) be the unique joint law of a solution process and its driver, and let \(\mathbb {W}\) be the unique law of a trace class \(\overline{H}\)-valued Brownian motion with covariance \(J J^*\). As strong existence holds, [10, Lemmata 1.13, 1.25] imply the existence of a measurable map \(F :C(\mathbb {R}_+, \overline{H}) \rightarrow \mathbb {C}= C(\mathbb {R}_+, {U})\) such that

$$\begin{aligned} \mathsf {P}(dx, dw) = \delta _{F (w)} (dx) \mathbb {W}(dw). \end{aligned}$$

Let \(((\Omega , {\mathcal {F}}, ({\mathcal {F}}_t)_{t \ge 0}, {\mathbb {P}}), W)\) be a driving system which supports two solution processes X and Y. Recalling that weak joint uniqueness holds, we obtain

$$\begin{aligned} {\mathbb {P}}\big (X = F(\overline{W})\big ) = {\mathbb {P}}\big (Y = F(\overline{W})\big ) = \iint \mathbb {1}_{\{x = F (w)\}} \mathsf {P}(dx, dw) = 1. \end{aligned}$$

Consequently, strong uniqueness holds and the proof is complete. \(\square \)

Let us relate weak solutions to so-called mild solutions, which are also frequently used in the literature (see, e.g. [15, 21]). Let \(L_2\) be the space of radonifying operators \(H \rightarrow {U}\). The following proposition is a direct consequence of [15, Theorem 13].

Proposition 2.6

Assume that \({U}\) is 2-smooth and that A is the generator of a \(C_0\)-semigroup \((S_t)_{t \ge 0}\) on \(U\). Let \((\mathbb {B}, W)\) be a driving system which supports a continuous \({U}\)-valued adapted process X such that a.s.

$$\begin{aligned} \int _0^t \Vert S_{t - s} \sigma _s (X)\Vert ^2_{L_2} ds < \infty , \quad t \in \mathbb {R}_+. \end{aligned}$$

Then, X is a solution process on \((\mathbb {B}, W)\) if and only if a.s. \(X \in \mathcal {I}\) and a.s.

$$\begin{aligned} X_t = S_t X_0 + \int _0^t S_{t - s} \mu _s (X) ds + \int _0^t S_{t - s} \sigma _s(X) d W_s, \quad t \in \mathbb {R}_+. \end{aligned}$$

This proposition shows that certain results from the literature are special cases of ours. For instance, Theorem 2.3 generalizes [21, Theorem 1.3], and Corollary 2.5 generalizes [21, Theorem 1.6].

We end this section with a comment on a possible application of our results. Establishing strong uniqueness for SPDEs is an important and often difficult task. Similar to the finite dimensional case, Corollary 2.5 shows that strong uniqueness can be deduced from weak uniqueness and strong existence. This strategy is interesting, for example, for equations of the type

$$\begin{aligned} d X_t = (A X_t + \mu _t (X)) dt + d W_t, \end{aligned}$$
(2.3)

whose weak properties can be deduced via Girsanov’s theorem from the corresponding properties of the Ornstein–Uhlenbeck equation

$$\begin{aligned} d X_t = A X_t dt + d W_t, \end{aligned}$$

see [13, Appendix I] for such an argument. In other words, by the Yamada–Watanabe theorems (Corollary 2.5 and [12, Theorem 5.3]), typically strong existence and uniqueness are equivalent for (2.3). More generally, Girsanov’s theorem can be used to deduce weak properties for equations of the type

$$\begin{aligned} d X_t = (A X_t + \sigma _t (X) \mu _t (X)) dt + \sigma _t (X) d W_t \end{aligned}$$

from the corresponding properties of the equation

$$\begin{aligned} d X_t = A X_t dt + \sigma _t (X) d W_t. \end{aligned}$$

It is interesting to note that the strong uniqueness properties of (2.3) turn out to be quite subtle for general non-anticipating \(\mu \), in fact more subtle than for Markovian \(\mu \). For a Hilbert space setting and suitable linear operators A, it was proven in [5, 6] that the Markovian equation

$$\begin{aligned} d X_t = (AX_t + \mu (X_t))dt + d W_t \end{aligned}$$

satisfies strong existence for every (locally) bounded \(\mu \). This remarkable result is not true for non-anticipating \(\mu \). Indeed, Tsirel’son’s example ([19, Section V.18]) shows that even for bounded non-anticipating \(\mu \) the SPDE (2.3) might not satisfy strong uniqueness.
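For the reader's convenience we sketch the shape of Tsirel'son's drift, adapted to our notation (a sketch; see [19, Section V.18] for the precise construction): fix deterministic times \(1 = t_0 > t_{-1} > t_{-2} > \cdots \) with \(t_k \downarrow 0\) as \(k \to -\infty \), write \(\{x\}\) for the fractional part of \(x \in \mathbb {R}\), and set

```latex
\mu_t(\omega) \triangleq
  \Big\{ \frac{\omega(t_k) - \omega(t_{k - 1})}{t_k - t_{k - 1}} \Big\}
  \quad \text{for } t \in (t_k, t_{k + 1}], \; k \le -1,
\qquad \mu_t(\omega) \triangleq 0 \quad \text{for } t > t_0.
```

This drift is bounded and non-anticipating, yet the associated one dimensional SDE has a weak solution but no strong solution; in particular, strong uniqueness fails.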

3 Proof of Theorem 2.3

The if implication is obvious. Thus, we will only prove the only if implication. Assume that weak uniqueness holds for the SPDE (1.1).

Let X be a solution process to the SPDE (1.1) which is defined on a driving system \(((\Omega ^*, \mathcal {F}^*, (\mathcal {F}^*_t)_{t \ge 0}, {\mathbb {P}}^*), W)\). We take a second driving system \(((\Omega ^o, \mathcal {F}^o, (\mathcal {F}^o_t)_{t \ge 0}, {\mathbb {P}}^o), B)\) and set

$$\begin{aligned} \Omega \triangleq \Omega ^* \times \Omega ^o, \qquad \mathcal {F} \triangleq \mathcal {F}^* \otimes \mathcal {F}^o, \qquad {\mathbb {P}} \triangleq {\mathbb {P}}^* \otimes {\mathbb {P}}^o. \end{aligned}$$

Define \(\mathcal {F}_t\) to be the \({\mathbb {P}}\)-completion of the \(\sigma \)-field \(\bigcap _{s > t} (\mathcal {F}^*_s \otimes \mathcal {F}^o_s)\). In the following the filtered probability space \(\mathbb {B} = (\Omega , \mathcal {F}^{\mathbb {P}}, (\mathcal {F}_t)_{t \ge 0}, {\mathbb {P}})\), where \(\mathcal {F}^{\mathbb {P}}\) denotes the \({\mathbb {P}}\)-completion of \(\mathcal {F}\), will be our underlying space. We extend X, W and B to \(\mathbb {B}\) by setting

$$\begin{aligned} X (\omega ^*, \omega ^o) \equiv X(\omega ^*), \qquad W (\omega ^*, \omega ^o) \equiv W (\omega ^*), \qquad B(\omega ^*, \omega ^o) \equiv B(\omega ^o) \end{aligned}$$

for \((\omega ^*, \omega ^o) \in \Omega \). It is easy to see that \((\mathbb {B}, W)\) and \((\mathbb {B}, B)\) are again driving systems and that X is a solution process on \((\mathbb {B}, W)\).

For a closed linear subspace \(H^o\) of H we denote by \(\hbox {pr}_{H^o}\) the orthogonal projection onto \(H^o\). For \((\omega , t) \in \mathbb {C}\times \mathbb {R}_+\) we define

$$\begin{aligned} \phi _t(\omega ) \triangleq \hbox {pr}_{\hbox {ker} (\sigma _t (\omega ))} \in L (H), \qquad \psi _t (\omega ) \triangleq \hbox {Id}_{H} - \phi _t (\omega ) \in L(H). \end{aligned}$$

Let us summarize some basic properties of \(\phi \) and \(\psi \):

$$\begin{aligned} \begin{aligned} \phi ^2 = \phi , \quad \psi ^2 = \psi , \quad \sigma \phi = 0_{U},\quad \sigma \psi = \sigma , \quad \phi \psi = \psi \phi = 0_H. \end{aligned} \end{aligned}$$
(3.1)
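The identities (3.1) follow directly from the definitions: \(\phi \) is the orthogonal projection onto \(\ker \sigma \) and \(\psi = \mathrm {Id}_H - \phi \); for instance,

```latex
\sigma \psi = \sigma (\mathrm{Id}_H - \phi) = \sigma - \sigma \phi = \sigma,
\qquad
\phi \psi = \phi (\mathrm{Id}_H - \phi) = \phi - \phi^2 = 0_H.
```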

The following lemma follows from an approximation argument (see the proof of [15, Lemma 9.2] for details).

Lemma 3.1

The processes \(\phi = (\phi _t)_{t \ge 0}\) and \(\psi = (\psi _t)_{t \ge 0}\) are progressively measurable as processes on the canonical space \((\mathbb {C}, \mathcal {C}, \mathbf {C})\).

By Lemma 3.1, we can define a sequence \(V = (V^k)_{k \in \mathbb {N}}\) of continuous local martingales via

$$\begin{aligned} V^k \triangleq \int _0^\cdot \langle \phi _t (X) e_k, d W_t \rangle _H + \int _0^\cdot \langle \psi _t (X)e_k, d B_t \rangle _H, \quad k \in \mathbb {N}. \end{aligned}$$

The following lemma is the technical core of the proof. We postpone its proof until the proof of Theorem 2.3 is complete.

Lemma 3.2

The process V is a standard \(\mathbb {R}^\infty \)-Brownian motion. Moreover, V is independent of X, i.e. the \(\sigma \)-fields \(\sigma (\overline{V}_t, t \in \mathbb {R}_+)\) and \(\sigma (X_t, t \in \mathbb {R}_+)\) are independent, where \(\overline{V}\) is defined by the formula

$$\begin{aligned} \overline{V} \triangleq \sum _{k = 1}^\infty V^k J e_k. \end{aligned}$$

For every \(k \in \mathbb {N}\), Proposition A.4 in Appendix A and (3.1) yield that

$$\begin{aligned} \int _0^\cdot \langle \phi _t (X) e_k, d V_t \rangle _{H}&= \int _0^\cdot \langle \phi _t (X) \phi _t (X) e_k, d W_t \rangle _{H} + \int _0^\cdot \langle \psi _t (X) \phi _t (X) e_k, d B_t \rangle _{H} \\ {}&= \int _0^\cdot \langle \phi _t (X) e_k, d W_t \rangle _{H}, \end{aligned}$$

and consequently,

$$\begin{aligned} \beta ^k = \int _0^\cdot \langle \psi _t (X) e_k, d W_t\rangle _H + \int _0^\cdot \langle \phi _t (X) e_k, d V_t \rangle _H, \quad k \in \mathbb {N}. \end{aligned}$$
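Spelled out, the last identity combines the previous display with \(\phi _t(X) + \psi _t(X) = \mathrm {Id}_H\):

```latex
\int_0^\cdot \langle \psi_t(X) e_k, d W_t \rangle_H
  + \int_0^\cdot \langle \phi_t(X) e_k, d V_t \rangle_H
= \int_0^\cdot \langle (\psi_t(X) + \phi_t(X)) e_k, d W_t \rangle_H
= \int_0^\cdot \langle e_k, d W_t \rangle_H
= \beta^k.
```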

By the construction of the stochastic integral, the law of the second term is determined by the law of (X, V), cf. [10, Proposition 17.26] for a similar argument in a finite dimensional setting. In the following we explain that the same is true for the first term, borrowing some ideas from the proof of [15, Lemma 9.2]. In fact, we even show that its law is determined by the law of X. Define

$$\begin{aligned} H (t, \omega , x, y^*) \triangleq \Vert \sigma _t(\omega )^* y^* - \psi _t (\omega ) x \Vert _{H}, \quad (t, \omega , x, y^*) \in \mathbb {R}_+ \times \mathbb {C} \times H \times U^*. \end{aligned}$$

Lemma 3.3

For every \(T > 0\) and \(x \in H\) there exists a sequence \((\mathfrak {s}^m)_{m \in \mathbb {N}}\) of progressively measurable \(U^*\)-valued processes on \((\mathbb {C}, \mathcal {C}, \mathbf {C})\) such that

$$\begin{aligned} H(t, \omega , x, \mathfrak {s}^m_t(\omega )) \le \tfrac{1}{m}, \quad (t, \omega , m) \in [0, T] \times \mathbb {C} \times \mathbb {N}. \end{aligned}$$

Proof

We verify the prerequisites of [15, Proposition 8.8] for \(\mathbb {X} = {U}^*\) endowed with the norm topology: The process \(H(\cdot , \cdot , x , y^*)\) is progressively measurable by Lemma 3.1. It is clear that \(y^* \mapsto H (t, \omega , x, y^*)\) is continuous. Finally, we show that \(\{y^* \in U^* :H(t, \omega , x, y^*) < 1/m\} \not = \emptyset \) for every \(m \in \mathbb {N}\). Fix \((t, \omega ) \in \mathbb {R}_+ \times \mathbb {C}\) and note that

$$\begin{aligned} \psi _t (\omega ) (H) = {\text {ker}} (\sigma _t (\omega ) )^\perp = \overline{\sigma _t(\omega ) ^* (U^*)} \subseteq H, \end{aligned}$$

cf. [22, Satz III.4.5]. Thus, there exists a sequence \((y^*_m)_{m \in \mathbb {N}} \subset U^*\) such that

$$\begin{aligned} \lim _{m \rightarrow \infty }\Vert \sigma _t (\omega )^* y^*_m - \psi _t (\omega ) x\Vert _H = 0. \end{aligned}$$

We conclude that \(\{y^* \in U^* :H(t, \omega , x, y^*) < 1/m\} \not = \emptyset \) for every \(m \in \mathbb {N}\). In summary, the claim follows from [15, Proposition 8.8]. \(\square \)

Fix \(T > 0\) and \(x \in H\) and let \((\mathfrak {s}^m)_{m \in \mathbb {N}}\) be as in Lemma 3.3. Then, Proposition A.2 in Appendix A yields that

$$\begin{aligned} \sup _{t \in [0, T]} \Big | \int _0^t \langle \psi _s (X) x, d W_s \rangle _{H}&- \int _0^t \langle \sigma _s (X)^* \mathfrak {s}^m_s (X), d W_s \rangle _{H} \Big | \rightarrow 0 \end{aligned}$$

in probability as \(m \rightarrow \infty \). Define \(Z = \{Z (y^*) :y^* \in {U}^*\}\) by

$$\begin{aligned} Z (y^*) \triangleq \int _0^\cdot \langle \sigma _t (X)^* y^*, d W_t\rangle _{H}, \quad y^* \in {U}^*. \end{aligned}$$

Since

$$\begin{aligned} \int _0^\cdot \langle \sigma _s (X)^* \mathfrak {s}^m_s (X), d W_s\rangle _H = \int _0^\cdot \langle d Z_s, \mathfrak {s}^m_s (X) \rangle _U \end{aligned}$$

by Proposition A.4 in Appendix A, the construction of the stochastic integral implies that the law of \(\int _0^\cdot \langle \sigma _s (X)^* \mathfrak {s}^m_s (X), d W_s\rangle _{H}\) is determined by the finite dimensional distributions of (X, Z). Thus, also the law of \(\int _0^\cdot \langle \psi _s (X) e_k, d W_s\rangle _{H}\) is determined by the finite dimensional distributions of (X, Z).

Lemma 3.4

For every (finite) random time \(T:\mathbb {C}\rightarrow \mathbb {R}_+\) and \(y^* \in D(A^*)\) there exists a measurable map \(F :\mathbb {C}\rightarrow \overline{\mathbb {R}}\) such that a.s. \(Z_{T (X)} (y^*) = F(X)\).

Proof

We define

$$\begin{aligned} F(\omega )&\triangleq \langle \omega (T(\omega )), y^*\rangle _{{U}} - \langle x_0, y^*\rangle _{{U}} - \int _0^{T(\omega )} \langle \omega (s), A^* y^* \rangle _{{U}} ds \\&\qquad - \int _0^{T(\omega )} \langle \mu _s (\omega ), y^* \rangle _{{U}} ds, \end{aligned}$$

with the convention that \(F(\omega ) \triangleq + \infty \) if the last integral diverges. The claim now follows from the definition of a weak solution. \(\square \)

Lemma 3.4 shows that the finite dimensional distributions of \(\{Z (y^*) :y^* \in D(A^*)\}\) are determined by the law of X. We now adapt an argument from the proof of [12, Lemma 4.1] to extend this observation to \(\{Z (y^*) :y^* \in U^*\}\). Define the localizing sequence

$$\begin{aligned} T_m \triangleq \inf \Big \{ t \in \mathbb {R}_+ :\int _0^t \Vert \sigma _s (X) \Vert ^2_L ds \ge m\Big \}, \quad m \in \mathbb {N}. \end{aligned}$$

Recall that \(D(A^*)\) is assumed to be sequentially weak\(^*\) dense. Thus, for every \(y^* \in {U}^*\) there exists a sequence \((y_k^*)_{k \in \mathbb {N}} \subset D(A^*)\) such that \(y^*_k \rightarrow y^*\) in the weak\(^*\) topology. Fix \(T > 0\) and \(m > 0\), and denote by \(L^2([0, T] \times \Omega ; H)\) the space of H-valued square integrable functions w.r.t. the product of the Lebesgue measure on [0, T] and \({\mathbb {P}}\). As \((y^*_k)_{k \in \mathbb {N}}\) is bounded by the uniform boundedness principle, the dominated convergence theorem yields that

$$\begin{aligned} \lim _{k \rightarrow \infty } {\mathbb {E}}\Big [ \int _0^{T \wedge T_m} \langle h (s), \sigma _s (X)^* y^*_k \rangle _H ds \Big ]&= {\mathbb {E}}\Big [ \int _0^{T \wedge T_m} \langle h (s), \sigma _s (X)^* y^* \rangle _H ds \Big ] \end{aligned}$$

for every \(h \in L^2([0, T] \times \Omega ; H)\). This means that

$$\begin{aligned} \sigma (X)^* y^*_k \mathbb {1}_{[0, T_m]} \rightarrow \sigma (X)^* y^* \mathbb {1}_{[0, T_m]} \end{aligned}$$

weakly in \(L^2([0, T] \times \Omega ; H)\) as \(k \rightarrow \infty \). By Mazur’s lemma ([22, Korollar III.3.9]), there exists a sequence \((x^*_k)_{k \in \mathbb {N}}\) in the convex hull of \((y^*_k)_{k \in \mathbb {N}}\) (and thus in \(D(A^*)\)) such that

$$\begin{aligned} \sigma (X)^* x^*_k \mathbb {1}_{[0, T_m]} \rightarrow \sigma (X)^* y^* \mathbb {1}_{[0, T_m]} \end{aligned}$$

strongly in \(L^2([0, T] \times \Omega ; H)\) as \(k \rightarrow \infty \). Hence, Proposition A.2 in Appendix A yields that

$$\begin{aligned} \sup _{s \in [0, T]} \big | Z_{s \wedge T_m} (x^*_k) - Z_{s \wedge T_m} (y^*) \big | \rightarrow 0 \end{aligned}$$

in probability as \(k \rightarrow \infty \). Finally, we conclude from Lemma 3.4 that the finite dimensional distributions of (X, Z) are determined by the law of X.

In summary, the law of (X, W) is determined by the law of (X, V) and hence, by Lemma 3.2, it is determined by the law of X. The proof is complete. \(\square \)

It remains to prove Lemma 3.2:

Proof of Lemma 3.2:

Step 1. Recall that each \(V^k\) is a continuous local martingale by the definition of the stochastic integral. Denote the quadratic variation process by \([\cdot , \cdot ]\). For \(i, j \in \mathbb {N}\) and \(t \in \mathbb {R}_+\), using Proposition A.3 in Appendix A and the self-adjointness of \(\phi \) and \(\psi \), we obtain

$$\begin{aligned}{}[V^i, V^j]_t&= \Big [ \int _0^\cdot \langle \phi _s (X) e_i, d W_s \rangle _H , \int _0^\cdot \langle \phi _s (X) e_j, d W_s \rangle _H\Big ]_t \\&\qquad \qquad + \Big [ \int _0^\cdot \langle \psi _s (X) e_i, d B_s \rangle _H, \int _0^\cdot \langle \psi _s (X) e_j, d B_s \rangle _H \Big ]_t \\ {}&= \int _0^t \langle \phi _s (X) e_i, \phi _s (X) e_j\rangle _H ds + \int _0^t \langle \psi _s (X) e_i, \psi _s (X) e_j\rangle _H ds \\ {}&= \int _0^t \langle (\phi _s (X) + \psi _s (X) ) e_i, e_j \rangle _H ds = t \mathbb {1}_{\{i = j\}}. \end{aligned}$$

Lévy’s characterization implies that V is a standard \(\mathbb {R}^\infty \)-Brownian motion.

Step 2. In this step we prepare the proof of the independence of V, or more precisely \(\overline{V}\), and X. Let \(C^2_b (\mathbb {R})\) be the set of bounded twice continuously differentiable functions with bounded first and second derivative.

Lemma 3.5

Let \(\overline{Y}\) be a continuous adapted \(\overline{H}\)-valued process starting at \(\overline{Y}_0 = 0_{\overline{H}}\). For \(h \in \overline{H}\) set \(\overline{Y} (h) \triangleq \langle \overline{Y}, h\rangle _{\overline{H}}\). The following are equivalent:

  1. (i)

    \(\overline{Y}\) is a trace class Brownian motion with covariance \(J J^*\).

  2. (ii)

    For all \(f \in C^2_b(\mathbb {R})\) with \(\inf _{x \in \mathbb {R}} f(x) > 0\) and \(f (0) = 1\), and all \(h \in \overline{H}\) the process

    $$\begin{aligned} M^f \triangleq f(\overline{Y}(h) )\exp \Big ( - \frac{\langle J J^* h, h \rangle _{\overline{H}}}{2} \int _0^\cdot \frac{f''(\overline{Y}_s (h)) ds}{f (\overline{Y}_s (h))} \Big ) \end{aligned}$$
    (3.2)

    is a martingale.

Proof

By the classical martingale problem for (one dimensional) Brownian motion (see, e.g. [20, Theorem 4.1.1]) and [7, Proposition 4.3.3], (ii) holds if and only if \(\overline{Y} (h)\) is a one dimensional Brownian motion with covariance \(\langle J J^* h , h \rangle _{\overline{H}}\) for all \(h \in \overline{H}\). This yields the claim. \(\square \)

For \(f = g(\langle \cdot , y^*\rangle _{U})\) with \(g \in C^2 (\mathbb {R})\) and \(y^* \in D(A^*)\) we define

$$\begin{aligned} \mathcal {L} f (\mathsf {X}, t) \triangleq g'(\langle \mathsf {X}_t, y^* \rangle _{U}) \big ( \langle \mathsf {X}_t&, A^* y^*\rangle _{U}+ \langle \mu _t (\mathsf {X}), y^*\rangle _{U}\big ) \\ {}&+ \tfrac{1}{2} g'' (\langle \mathsf {X}_t, y^*\rangle _{U}) \langle \sigma _t (\mathsf {X})^* y^*,\sigma _t (\mathsf {X})^* y^*\rangle _{H}. \end{aligned}$$
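For orientation, substituting \(g(x) = x^2\) (one of the test functions used in the proof of Lemma 3.6 below) gives, since \(g'(x) = 2x\) and \(g''(x) = 2\),

```latex
\mathcal{L} f(\mathsf{X}, t)
  = 2 \langle \mathsf{X}_t, y^* \rangle_U
      \big( \langle \mathsf{X}_t, A^* y^* \rangle_U
            + \langle \mu_t(\mathsf{X}), y^* \rangle_U \big)
    + \langle \sigma_t(\mathsf{X})^* y^*, \sigma_t(\mathsf{X})^* y^* \rangle_H.
```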

Furthermore, we set

$$\begin{aligned} \mathfrak {X} \triangleq \big \{f = g(\langle \cdot , y^* \rangle _{U}) :g \in C^2(\mathbb {R}), y^* \in D(A^*) \big \}. \end{aligned}$$

The following is a version of [12, Theorem 3.6] for our framework.

Lemma 3.6

A probability measure \(\mathbb {Q}'\) on \((\mathbb {C}, \mathcal {C}, \mathbf {C})\) is the law of a solution process to the SPDE (1.1) if and only if \({\mathbb {Q}'(\mathcal {I}, {\mathsf {X}}_0 = x_0) = 1}\) and for all \(f \in \mathfrak {X}\) the process

$$\begin{aligned} K^f \triangleq f (\mathsf {X}) - f(x_0) - \int _0^\cdot \mathcal {L} f (\mathsf {X}, s) ds \end{aligned}$$
(3.3)

is a local \((\mathbf {C}^{\mathbb {Q}'}, \mathbb {Q}')\)-martingale, where \(\mathbf {C}^{\mathbb {Q}'}\) denotes the \(\mathbb {Q}'\)-completion of \(\mathbf {C}\). Furthermore, for every solution process X to the SPDE (1.1) the process \(K^f \circ X\) is a local martingale on the corresponding driving system.

Proof

The structure of the proof is classical and similar to the finite dimensional case (see, e.g. [11, Chapter 5.4]). Let \(\mathbb {Q}'\) be a solution measure to the SPDE (1.1) and let \((\mathbb {B}, W, X)\) be a weak solution such that \(\mathbb {B}=(\Omega , \mathcal {F}, (\mathcal {F}_t)_{t \ge 0}, \mathbb {P})\) and \(\mathbb {Q}' = \mathbb {P} \circ X^{-1}\). Take \(f = g(\langle \cdot , y^*\rangle _{U}) \in \mathfrak {X}\). Then, Itô’s formula yields that

$$\begin{aligned} K^f \circ X = \int _0^\cdot g'(\langle X_s, y^* \rangle _{U}) d \Big (\int _0^s \langle \sigma _u(X)^* y^*, dW_u\rangle _H\Big ). \end{aligned}$$
(3.4)

Hence, \(K^f \circ X\) is a local martingale. Due to [8, Remark 10.40], the local martingale property transfers to the canonical space \((\mathbb {C}, \mathcal {C}^{\mathbb {Q}'}, \mathbf {C}^{\mathbb {Q}'}, \mathbb {Q}')\) and the only if implication follows.

Conversely, let \(\mathbb {Q}'\) be as in the statement of the lemma. Then, using the hypothesis with \(g(x) = x\) and \(g(x) = x^2\) and similar arguments as in the proof of [11, Proposition 5.4.6], for every \(y^* \in D(A^*)\) it follows that

$$\begin{aligned} \mathsf {Y}(y^*) \triangleq \langle \mathsf {X}, y^*\rangle _{U}- \langle \mathsf {X}_0, y^*\rangle _{U}- \int _0^\cdot \langle \mathsf {X}_s, A^* y^*\rangle _{U}ds -\int _0^\cdot \langle \mu _s (\mathsf {X}), y^*\rangle _{U}ds \end{aligned}$$
(3.5)

is a local \((\mathbf {C}^{\mathbb {Q}'}, \mathbb {Q}')\)-martingale with quadratic variation

$$\begin{aligned}{}[ \mathsf {Y}(y^*), \mathsf {Y}(y^*) ] = \int _0^\cdot \langle \sigma _s (\mathsf {X})^* y^*, \sigma _s(\mathsf {X})^* y^*\rangle _{H} ds. \end{aligned}$$
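Let us indicate how the two test functions enter: for \(g(x) = x\), i.e. \(f = \langle \cdot , y^* \rangle _{U}\), the process \(K^f\) coincides with \(\mathsf {Y}(y^*)\) (recall that \(\mathbb {Q}'(\mathsf {X}_0 = x_0) = 1\)), which gives the local martingale property of (3.5), and for \(g(x) = x^2\) a direct computation shows that

$$\begin{aligned} \mathsf {Y}(y^*)^2 - \int _0^\cdot \langle \sigma _s (\mathsf {X})^* y^*, \sigma _s(\mathsf {X})^* y^*\rangle _{H} ds \end{aligned}$$

is also a local \((\mathbf {C}^{\mathbb {Q}'}, \mathbb {Q}')\)-martingale, which identifies the quadratic variation.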

As \(D(A^*)\) is supposed to be weak\(^*\) dense in \({U}^*\), it separates points of \({U}\). Thus, we deduce from [16, Theorem 3.1] that, possibly on an extension of the filtered probability space \((\mathbb {C}, \mathcal {C}^{\mathbb {Q}'}, \mathbf {C}^{\mathbb {Q}'}, \mathbb {Q}')\), there exists a standard \(\mathbb {R}^\infty \)-Brownian motion W such that

$$\begin{aligned} \mathsf {Y}(y^*) = \int _0^\cdot \langle \sigma _s (\mathsf {X})^* y^*, d W_s\rangle _H, \quad y^* \in D(A^*). \end{aligned}$$

Combining this representation with (3.5) shows that \(\mathsf {X}\) solves the SPDE (1.1) on the extension, i.e. \(\mathbb {Q}'\) is the law of a solution process. This proves the if implication. The proof is complete. \(\square \)

Define \(M^f\) and \(K^g\) as in (3.2) and (3.3) with \(\overline{Y}\) replaced by \(\overline{V}\) and \(\mathsf {X}\) replaced by X. It follows from Lemma 3.5 and Step 1 that \(M^f\) is a martingale. Similarly, because X is a solution process to the SPDE (1.1), \(K^g\) is a local martingale by Lemma 3.6. We now show that \([M^f, K^g] = 0\). Itô’s formula yields that

$$\begin{aligned} dM^f_t = \exp \Big ( - \frac{\langle J J^* h, h \rangle _{\overline{H}}}{2} \int _0^t \frac{f'' (\overline{V}_s (h)) ds}{f(\overline{V}_s (h))} \Big ) f'(\overline{V}_t (h)) d \overline{V}_t (h). \end{aligned}$$
(3.6)
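The identity (3.6) can be seen as follows: write \(c \triangleq \langle J J^* h, h \rangle _{\overline{H}}\), let \(E\) denote the exponential factor in (3.6) and recall the product form \(M^f = f(\overline{V}(h)) E\) from (3.2). Since \([\overline{V}(h), \overline{V}(h)]_t = c t\), the product rule yields

$$\begin{aligned} d M^f_t = E_t \Big ( f'(\overline{V}_t(h)) \, d \overline{V}_t(h) + \tfrac{1}{2} f''(\overline{V}_t(h)) c \, dt \Big ) - \frac{c}{2} \frac{f''(\overline{V}_t(h))}{f(\overline{V}_t(h))} f(\overline{V}_t(h)) E_t \, dt, \end{aligned}$$

and the finite variation terms cancel.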

Using Proposition A.3 in Appendix A, we deduce from (3.1) that

$$\begin{aligned} \Big [ \overline{V}(h), \int _0^{\cdot } \langle \sigma _s(X)^* y^*, d W_s\rangle _{H}\Big ]&= \Big [ \int _0^\cdot \langle \phi _s (X) J^*h, d W_s \rangle _{H}, \int _0^\cdot \langle \sigma _s (X)^* y^*, d W_s \rangle _{H} \Big ] \\&= \int _0^\cdot \langle \sigma _s (X) \phi _s (X) J^* h,y^* \rangle _{{U}} ds = 0. \end{aligned}$$

In view of (3.4) and (3.6), we conclude that \([M^f, K^g] = 0\).
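Explicitly, combining (3.4) and (3.6),

$$\begin{aligned}{}[M^f, K^g] = \int _0^\cdot E_s f'(\overline{V}_s (h)) g'(\langle X_s, y^* \rangle _{U}) \, d \Big [ \overline{V}(h), \int _0^{\cdot } \langle \sigma _u(X)^* y^*, d W_u\rangle _{H}\Big ]_s = 0, \end{aligned}$$

where \(E\) denotes the exponential factor in (3.6) and the covariation vanishes by the previous display.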

Step 3: We are now in a position to follow the proof of [3, Theorem 2.3]. More precisely, we deduce the independence of V and X from \([M^f, K^g] = 0\). For \(n \in \mathbb {N}\) set

$$\begin{aligned} T_n \triangleq \inf \big \{t \in \mathbb {R}_+ :|K^g_t| \ge n \big \}, \qquad K^{g, n} \triangleq K^g_{\cdot \wedge T_n}. \end{aligned}$$

As \(K^g\) has continuous paths, \(K^{g, n}\) is bounded (by n) and consequently a martingale. Step 2 yields that \([M^{f}, K^{g, n}] = [M^f, K^g]_{\cdot \wedge T_n} = 0\). Hence, by integration by parts, the process \(M^f K^{g, n}\) is a local martingale and, as it is bounded on bounded time intervals, a true martingale. Next, fix a bounded stopping time S and define a measure \(\mathbb {Q}'\) on \((\Omega , \mathcal {F})\) as follows:

$$\begin{aligned} \mathbb {Q}' (G) \triangleq {\mathbb {E}}^{\mathbb {P}} \big [ M^f_S \mathbb {1}_G \big ], \qquad G \in \mathcal {F}. \end{aligned}$$

As \(M^f_0 = 1\), the optional stopping theorem shows that \(\mathbb {Q}'\) is a probability measure. Since \(M^f, K^{g, n}\) and \(M^f K^{g, n}\) are \({\mathbb {P}}\)-martingales, we deduce again from the optional stopping theorem that for every bounded stopping time T

$$\begin{aligned} {\mathbb {E}}^{\mathbb {Q}'} \big [ K^{g, n}_T \big ]&= {\mathbb {E}}^{\mathbb {P}} \big [M^f_S K^{g, n}_T\big ] \\ {}&= {\mathbb {E}}^{\mathbb {P}} \big [ M^f_S \mathbb {1}_{\{S \le T\}} {\mathbb {E}}^{\mathbb {P}}\big [K^{g, n}_T | \mathcal {F}_{S \wedge T} \big ] + K^{g, n}_T \mathbb {1}_{\{T< S\}} {\mathbb {E}}^{\mathbb {P}}\big [M^f_S | \mathcal {F}_{S \wedge T} \big ]\big ] \\ {}&= {\mathbb {E}}^{\mathbb {P}} \big [ M^f_SK^{g, n}_{S \wedge T}\mathbb {1}_{\{S \le T\}} + K^{g, n}_T M^f_{S \wedge T}\mathbb {1}_{\{T < S\}}\big ] \\ {}&= {\mathbb {E}}^{\mathbb {P}} \big [ M^f_{S \wedge T} K^{g, n}_{S \wedge T} \big ] = 0. \end{aligned}$$

Thus, because T was arbitrary and \(T_n \nearrow \infty \) as \(n \rightarrow \infty \), \(K^{g}\) is a local \(\mathbb {Q}'\)-martingale. As g was arbitrary, we deduce from Lemma 3.6 and [8, Remark 10.40] that \(\mathbb {Q}' \circ X^{-1}\) is a solution measure to the SPDE (1.1). The weak uniqueness assumption now implies that \({\mathbb {P}} \circ X^{-1} = \mathbb {Q}' \circ X^{-1}\). Next, fix a set \(F \in \sigma (X_t, t \in \mathbb {R}_+)\) such that \({\mathbb {P}} (F) > 0\) and set

$$\begin{aligned} \mathbb {Q}^* (G) \triangleq \frac{{\mathbb {P}}(G, F)}{{\mathbb {P}}(F)},\quad G \in \mathcal {F}. \end{aligned}$$

Clearly, \(\mathbb {Q}^*\) is a probability measure on \((\Omega , \mathcal {F})\). Recalling \({\mathbb {P}}(F) = \mathbb {Q}'(F)\), we obtain

$$\begin{aligned} {\mathbb {E}}^{\mathbb {Q}^*} \big [ M^f_S \big ] = \frac{\mathbb {Q}'(F)}{{\mathbb {P}}(F)} = 1. \end{aligned}$$

Thus, because S was arbitrary, \(M^f\) is a \(\mathbb {Q}^*\)-martingale. Since f was arbitrary, Lemma 3.5 yields that \(\overline{V}\) is a trace class \(\mathbb {Q}^*\)-Brownian motion with covariance \(J J^*\). Consequently, for every \(G \in \sigma (\overline{V}_t, t \in \mathbb {R}_+)\) we have

$$\begin{aligned} {\mathbb {P}}(G, F) = \mathbb {Q}^*(G) {\mathbb {P}}(F) = {\mathbb {P}}(G) {\mathbb {P}}(F). \end{aligned}$$

Since this equality holds trivially whenever \(F \in \sigma (X_t, t \in \mathbb {R}_+)\) satisfies \(\mathbb {P}(F) = 0\), we conclude that V and X are independent. The proof is complete. \(\square \)