Appendix
1.1 Auxiliary lemmas and their proofs
Lemma 1
An \(n \times n\) matrix A is circulant tridiagonal if \(A_{j,i} = 0\) for all pairs (j, i) such that \(d(j,i) > 1\) (recall \(d(j,i) = \min \{ |j-i|, |j+n-i|, |i+n-j| \}\)). For such a matrix, we have
$$\begin{aligned} |(\mathrm{{e}}^{A})_{j,i}| \le \mathrm{{e}}^{-C\cdot d(j,i)}\cdot \mathrm{{e}}^{(\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)\rho }, \quad \forall \ j,i=1,2,\ldots ,n, \end{aligned}$$
where \(\rho = \max \limits _{j,i = 1,2,\ldots ,n} |A_{j,i}|\) and C is any fixed positive constant.
Remark 1
If A is, in addition, a tridiagonal matrix, we have
$$\begin{aligned} |(\mathrm{{e}}^{A})_{j,i}| \le \mathrm{{e}}^{-C\cdot |j-i|}\cdot \mathrm{{e}}^{(\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)\rho }, \quad \forall \ j,i=1,2,\ldots ,n. \end{aligned}$$
Proof
First, we prove by mathematical induction that, for any positive constant C,
$$\begin{aligned} \begin{aligned}&|(A^{m})_{j,i}| \\&\le (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)^{m}\cdot \rho ^m\cdot \mathrm{{e}}^{-C\cdot d(j,i)}, \ \text {for} \ m=0,1,\ldots . \end{aligned} \end{aligned}$$
(31)
The conclusion is true for \(m=0\) since \(A^{0} = {\mathbf {I}}_n\) and \(\mathrm{{e}}^{-C\cdot d(j,i)} > 0\) with \(\mathrm{{e}}^{-C\cdot d(j,j)} = 1\). Suppose the result holds for \(m=k\), namely \(|(A^{k})_{j,i}| \le (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)^{k}\cdot \rho ^k\cdot \mathrm{{e}}^{-C\cdot d(j,i)}\). Then, for \(m=k+1\), we observe that
$$\begin{aligned} \begin{aligned}&(A^{k+1})_{j,i} \\&= (A^{k})_{j,i-1}A_{i-1,i} + (A^{k})_{j,i}A_{i,i} + (A^{k})_{j,i+1}A_{i+1,i}, \end{aligned} \end{aligned}$$
(32)
with \((A^{k})_{j,0} = (A^{k})_{j,n}\), \((A^{k})_{j, n+1} = (A^{k})_{j,1}\), \(A_{0,i} = A_{n,i}\), and \(A_{n+1, i} = A_{1,i}\). Since \(|A_{j,i}| \le \rho \), together with (31) for \(m=k\), (32) gives us
$$\begin{aligned} \begin{aligned}&|(A^{k+1})_{j,i}| \\&\le | (A^{k})_{j,i-1}|\,|A_{i-1,i} | + |(A^{k})_{j,i}|\,|A_{i,i}| + |(A^{k})_{j,i+1}|\,|A_{i+1,i} | \\&\le (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)^{k}\cdot \rho ^k\cdot \mathrm{{e}}^{-C\cdot d(j,i-1)} \cdot \rho \\&\qquad + (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)^{k}\cdot \rho ^k\cdot \mathrm{{e}}^{-C\cdot d(j,i)} \cdot \rho \\&\qquad + (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)^{k}\cdot \rho ^k\cdot \mathrm{{e}}^{-C\cdot d(j,i+1)} \cdot \rho . \end{aligned} \end{aligned}$$
(33)
We claim that
$$\begin{aligned} \begin{aligned}&\mathrm{{e}}^{-C\cdot d(j,i-1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot d(j,i+1)} \\&\le (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)\mathrm{{e}}^{-C\cdot d(j,i)}, \end{aligned} \end{aligned}$$
(34)
which will be proved later. Combining it with (33), we obtain
$$\begin{aligned} |(A^{k+1})_{j,i}| \le (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)^{k+1}\cdot \rho ^{k+1}\cdot \mathrm{{e}}^{-C\cdot d(j,i)}, \end{aligned}$$
(35)
which finishes the proof of (31). Claim (34) can be proved by the following case-by-case discussion. For the case \(n=2k\) with \(k\in {\mathbb {N}}^{+}\), we see \(d(j,i) \le k\) from the definition of d(j, i). If \(d(j,i) < k\), we have
$$\begin{aligned}&\mathrm{{e}}^{-C\cdot d(j,i-1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot d(j,i+1)} \\&= \mathrm{{e}}^{-C\cdot (d(j,i)+1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot (d(j,i)-1)} \\&= (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)\mathrm{{e}}^{-C\cdot d(j,i)}. \end{aligned}$$
Otherwise, if \(d(j,i) = k\), we have
$$\begin{aligned}&\mathrm{{e}}^{-C\cdot d(j,i-1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot d(j,i+1)} \\&= \mathrm{{e}}^{-C\cdot (d(j,i)-1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot (d(j,i)-1)} \\&\le (\mathrm{{e}}^{-C}+\mathrm{{e}}^{-C}+1)\mathrm{{e}}^{-C\cdot d(j,i)}. \end{aligned}$$
For \(n=2k-1\) with \(k\in {\mathbb {N}}^{+}\), we observe \(d(j,i) \le k-1\). If \(d(j,i) < k-1\), we have
$$\begin{aligned}&\mathrm{{e}}^{-C\cdot d(j,i-1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot d(j,i+1)} \\&= \mathrm{{e}}^{-C\cdot (d(j,i)-1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot (d(j,i)+1)} \\&= (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)\mathrm{{e}}^{-C\cdot d(j,i)}. \end{aligned}$$
Otherwise, if \(d(j,i) =k-1\), we have
$$\begin{aligned}&\mathrm{{e}}^{-C\cdot d(j,i-1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot d(j,i+1)} \\&= \mathrm{{e}}^{-C\cdot (d(j,i)-1)} + \mathrm{{e}}^{-C\cdot d(j,i)} + \mathrm{{e}}^{-C\cdot d(j,i)} \\&\le (\mathrm{{e}}^{-C}+1+1)\mathrm{{e}}^{-C\cdot d(j,i)}. \end{aligned}$$
In all of the cases discussed above, (34) holds.
With (31) proved, we have
$$\begin{aligned}&|(\mathrm{{e}}^{A})_{j,i}| = \Big | \sum _{m=0}^{\infty } \frac{1}{m!}\cdot (A^m)_{j,i} \Big | \le \sum _{m=0}^{\infty } \frac{1}{m!}\cdot |(A^m)_{j,i}| \\&\le \sum _{m=0}^{\infty } \frac{1}{m!}\cdot (\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)^{m}\cdot \rho ^m\cdot \mathrm{{e}}^{-C\cdot d(j,i)} \\&= \mathrm{{e}}^{(\mathrm{{e}}^{C}+\mathrm{{e}}^{-C}+1)\rho } \cdot \mathrm{{e}}^{-C\cdot d(j,i)}. \end{aligned}$$
The proof is complete. \(\square \)
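As a quick numerical sanity check of Lemma 1 (not part of the proof), one can generate a random circulant tridiagonal matrix and compare the entries of its exponential against the claimed bound. The matrix size, band magnitude \(\rho\), and constant C below are illustrative choices, and the matrix exponential is approximated by a truncated Taylor series.

```python
import numpy as np

def expm_taylor(A, terms=60):
    # Truncated Taylor series of e^A; adequate for small matrices of moderate norm.
    E, term = np.eye(len(A)), np.eye(len(A))
    for m in range(1, terms):
        term = term @ A / m
        E = E + term
    return E

def d(j, i, n):
    # Cyclic distance d(j, i) = min{|j - i|, n - |j - i|} on indices 0, ..., n-1.
    return min(abs(j - i), n - abs(j - i))

n, rho, C = 12, 0.5, 1.0                      # illustrative parameters
rng = np.random.default_rng(0)
A = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        if d(j, i, n) <= 1:                   # circulant tridiagonal band
            A[j, i] = rng.uniform(-rho, rho)

E = expm_taylor(A)
slack = np.exp((np.exp(C) + np.exp(-C) + 1) * rho)
ok = all(abs(E[j, i]) <= np.exp(-C * d(j, i, n)) * slack
         for j in range(n) for i in range(n))
```

The factor `slack` is the C-dependent constant \(\mathrm{e}^{(\mathrm{e}^{C}+\mathrm{e}^{-C}+1)\rho }\) of the lemma, and the entrywise decay in `d(j, i, n)` is what the checks in `ok` verify.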
Lemma 2
(Gronwall’s inequality) Given an \(n \times n\) matrix M whose entries are positive, if the two \(n \times 1\) vectors \({\mathbf {x}}(t)\), \({\mathbf {b}}(t)\) satisfy
$$\begin{aligned} M\cdot {\mathbf {x}}(t) + {\mathbf {b}}(t) \succeq \dfrac{\mathrm{{d}}{\mathbf {x}}(t)}{\mathrm{{d}}t}, \ \text {for} \ t \in [0, T], \end{aligned}$$
(36)
(recall \(\succeq \) is entry-wise inequality, as defined in Sect. 1.4), then
$$\begin{aligned} \mathrm{{e}}^{MT}\cdot {\mathbf {x}}(0) + \int _0^T \mathrm{{e}}^{M(T-t)} \cdot \mathbf{b} (t) \mathrm{{d}}t\succeq {\mathbf {x}}(T). \end{aligned}$$
(37)
Proof
Consider an n-dimensional column vector \({\mathbf {y}}(t)\) defined as
$$\begin{aligned} \frac{\mathrm{{d}}{\mathbf {y}}(t)}{\mathrm{{d}}t} = M\cdot {\mathbf {y}}(t) + {\mathbf {b}}(t), \ \text {for} \ t \in [0, T], \end{aligned}$$
with initial value \({\mathbf {y}}(0) = {\mathbf {x}}(0) + \epsilon \cdot {\mathbf {1}}_{n \times 1}\), where \(\epsilon \) is a positive constant. Its solution can be written as
$$\begin{aligned} {\mathbf {y}}(T) = \mathrm{{e}}^{MT}\cdot ( {\mathbf {x}}(0) + \epsilon \cdot {\mathbf {1}}_{n\times 1} ) + \int _0^T \mathrm{{e}}^{M(T-t)} \cdot \mathbf{b} (t) \mathrm{{d}}t. \end{aligned}$$
Define \({\mathbf {z}}(t) = {\mathbf {y}}(t) - {\mathbf {x}}(t)\) for \(t \in [0,T]\); then \({\mathbf {z}}(0) = \epsilon \cdot {\mathbf {1}}_{n \times 1} \succeq {\mathbf {0}}_{n \times 1}\). According to (36), we have
$$\begin{aligned} \frac{\mathrm{{d}}{\mathbf {z}}(t)}{\mathrm{{d}}t}&= \frac{\mathrm{{d}}{\mathbf {y}}(t)}{\mathrm{{d}}t} - \frac{\mathrm{{d}}{\mathbf {x}}(t)}{\mathrm{{d}}t} \\&= M\cdot {\mathbf {y}}(t) + {\mathbf {b}}(t) - \frac{\mathrm{{d}}{\mathbf {x}}(t)}{\mathrm{{d}}t} \\&\succeq M \cdot ({\mathbf {y}}(t) - {\mathbf {x}}(t)) = M \cdot {\mathbf {z}}(t). \end{aligned}$$
We define \(\tau = \inf \{ t: \text {there exists at least one index } i \ \text {such that} \ {\mathbf {z}}_{i}(t) < 0 \}\), and note that \(\tau > 0\) by the continuity of \({\mathbf {z}}(t)\). According to this definition, we have \({\mathbf {z}}(t) \succeq {\mathbf {0}}_{n \times 1}\) for \(t \in [0,\tau )\). Since all the entries of M are positive, it follows that \(\frac{\mathrm{{d}}{\mathbf {z}}(t)}{\mathrm{{d}}t} \succeq M \cdot {\mathbf {z}}(t) \succeq {\mathbf {0}}_{n \times 1}\) for \(t \in [0, \tau )\). This implies that \({\mathbf {z}}(t)\) is entry-wise nondecreasing on \([0, \tau )\), thus \({\mathbf {z}}(t) \succeq {\mathbf {z}}(0) = \epsilon \cdot {\mathbf {1}}_{n \times 1}\) for \(t \in [0, \tau )\), namely
$$\begin{aligned} \begin{aligned}&\mathrm{e}^{Mt}\cdot ( {\mathbf {x}}(0) + \epsilon \cdot {\mathbf {1}}_{n\times 1} ) + \int _0^t \mathrm{{e}}^{M(t-s)} \cdot \mathbf{b} (s) {\mathrm{d}}s - \mathbf{x}(t) \\&\quad \succeq \epsilon \cdot {\mathbf {1}}_{n \times 1}. \end{aligned} \end{aligned}$$
(38)
We now show that \(\tau > T\) by contradiction. Suppose that \(\tau \le T\). From the definition of \(\tau \) and the analysis above, we know that \({\mathbf {z}}(t) \succeq \epsilon \cdot {\mathbf {1}}_{n \times 1}\) for \(t \in [0,\tau )\), while there exists at least one index, say i, such that \({\mathbf {z}}_{i}(\tau ) < 0\). Thus, we have \(\lim \limits _{t \rightarrow \tau ^{-}} {\mathbf {z}}_{i}(t) - {\mathbf {z}}_{i}(\tau ) \ge \epsilon > 0\), which contradicts the continuity of \({\mathbf {z}}(t)\) on \([0,T]\). Hence \(\tau > T\).
Conclusion (37) is obtained by taking \(t = T\) and letting \(\epsilon \rightarrow 0\) in (38), which is valid since \(\tau > T\). \(\square \)
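To illustrate Lemma 2 numerically (a sketch, not part of the proof; the matrix M, forcing b(t), and initial value below are illustrative), one can integrate the equality case \(\mathrm{d}{\mathbf {x}}/\mathrm{d}t = M{\mathbf {x}} + {\mathbf {b}}(t)\) by forward Euler and compare \({\mathbf {x}}(T)\) with the right-hand side of (37), discretizing the integral by the midpoint rule:

```python
import numpy as np

def expm_taylor(A, terms=40):
    # Truncated Taylor series of e^A (small matrices, moderate norms).
    E, term = np.eye(len(A)), np.eye(len(A))
    for m in range(1, terms):
        term = term @ A / m
        E = E + term
    return E

M = np.array([[0.5, 0.2],
              [0.1, 0.4]])                      # positive entries, as in Lemma 2
b = lambda t: np.array([np.sin(t) ** 2, 1.0])   # illustrative forcing
T, n = 1.0, 2000
h = T / n

# Forward Euler for dx/dt = M x + b(t), the equality case of (36).
x = np.array([1.0, 2.0])
x0 = x.copy()
for k in range(n):
    x = x + h * (M @ x + b(k * h))

# Right-hand side of (37): e^{MT} x(0) + \int_0^T e^{M(T-t)} b(t) dt.
rhs = expm_taylor(M * T) @ x0
for k in range(n):
    t = (k + 0.5) * h
    rhs = rhs + h * (expm_taylor(M * (T - t)) @ b(t))
```

Up to discretization error, `x` matches `rhs` in this equality case; for a strict inequality in (36), `x` would lie entrywise below `rhs`.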
Lemma 3
For two \(n \times n\) matrices M and N whose entries are positive, if
$$\begin{aligned} M \preceq N, \end{aligned}$$
(39)
we have
$$\begin{aligned} \mathrm{{e}}^{M} \preceq \mathrm{{e}}^{N}. \end{aligned}$$
(40)
Proof
We first prove by mathematical induction that
$$\begin{aligned} M^{k} \preceq N^{k}, \quad \text {for} \ k=0,1,\ldots . \end{aligned}$$
(41)
The result is true for \(k=0\), since \(M^{0} = N^{0} = {\mathbf {I}}_n\). Suppose the result holds for \(k=K\), namely
$$\begin{aligned} M^{K} \preceq N^{K}. \end{aligned}$$
(42)
Then for \(k= K+1\), since
$$\begin{aligned} (M^{K+1})_{i,j}&= \sum _{s=1}^{n}(M^{K})_{i,s}M_{s,j}, \\ (N^{K+1})_{i,j}&= \sum _{s=1}^{n}(N^{K})_{i,s}N_{s,j}, \ \ \text {for} \ i,j = 1,\ldots ,n, \end{aligned}$$
the result \((M^{K+1})_{i,j} \le (N^{K+1})_{i,j}\) follows from (39) and (42).
According to the definition of the matrix exponential, we have
$$\begin{aligned} \mathrm{{e}}^{M} - \mathrm{{e}}^{N} = \sum _{k=0}^{+\infty } \dfrac{1}{k!} (M^{k} - N^{k}), \end{aligned}$$
and conclusion (40) follows from (41). \(\square \)
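A small numerical check of Lemma 3 (illustrative sizes and entries; the exponential is again a truncated Taylor series):

```python
import numpy as np

def expm_taylor(A, terms=40):
    # Truncated Taylor series of e^A.
    E, term = np.eye(len(A)), np.eye(len(A))
    for m in range(1, terms):
        term = term @ A / m
        E = E + term
    return E

rng = np.random.default_rng(1)
M = rng.uniform(0.1, 0.5, size=(4, 4))        # positive entries
N = M + rng.uniform(0.0, 0.3, size=(4, 4))    # N dominates M entrywise
ok = np.all(expm_taylor(M) <= expm_taylor(N) + 1e-12)
```

The entrywise comparison in `ok` mirrors (40): since every power satisfies \(M^k \preceq N^k\), the exponential series preserves the ordering.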
Lemma 4
(Volterra’s inequality) Given an \(n \times n\) matrix M whose entries are positive, if the two \(n \times 1\) vectors \({\mathbf {x}}(t)\), \({\mathbf {b}}(t)\) satisfy
$$\begin{aligned} {\mathbf {x}}(t) \preceq {\mathbf {b}}(t) + \int _{0}^{t}M\cdot {\mathbf {x}}(s) \mathrm{{d}}s, \ \text {for} \ t \in [0, T], \end{aligned}$$
(43)
we have
$$\begin{aligned} {\mathbf {x}}(T) \preceq {\mathbf {b}}(T) + \int _{0}^{T} \mathrm{{e}}^{M(T-t)}\cdot M \cdot {\mathbf {b}}(t)\mathrm{{d}}t. \end{aligned}$$
(44)
Proof
We define \({\mathbf {y}}(t) = \int _{0}^{t} M\cdot {\mathbf {x}}(s)\mathrm{{d}}s\) for \(t \in [0, T]\), then (43) gives us
$$\begin{aligned} {\mathbf {x}}(t) \preceq {\mathbf {b}}(t) + {\mathbf {y}}(t), \ \text {for} \ t \in [0,T]. \end{aligned}$$
(45)
We also observe that
$$\begin{aligned} \dfrac{\mathrm{{d}}{\mathbf {y}}(t)}{\mathrm{{d}}t} = M\cdot {\mathbf {x}}(t) \preceq M\cdot {\mathbf {b}}(t) + M\cdot {\mathbf {y}}(t), \end{aligned}$$
with \({\mathbf {y}}(0) = 0\). According to Lemma 2, we have
$$\begin{aligned} {\mathbf {y}}(T) \preceq \int _{0}^{T} \mathrm{{e}}^{M(T-t)}\cdot M \cdot {\mathbf {b}}(t)\mathrm{{d}}t. \end{aligned}$$
(46)
Conclusion (44) follows from (45) with \(t=T\) and (46). \(\square \)
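Lemma 4 can be sanity-checked in the special case of a constant forcing \({\mathbf {b}}(t) \equiv {\mathbf {b}}\), where \({\mathbf {x}}(t) = \mathrm{e}^{Mt}{\mathbf {b}}\) satisfies (43) with equality and the right-hand side of (44) collapses to \({\mathbf {b}} + \int _0^T \mathrm{e}^{M(T-t)}M{\mathbf {b}}\,\mathrm{d}t = \mathrm{e}^{MT}{\mathbf {b}}\). The entries of M and b below are illustrative:

```python
import numpy as np

def expm_taylor(A, terms=40):
    # Truncated Taylor series of e^A.
    E, term = np.eye(len(A)), np.eye(len(A))
    for m in range(1, terms):
        term = term @ A / m
        E = E + term
    return E

M = np.array([[0.3, 0.1],
              [0.2, 0.4]])          # positive entries, as in Lemma 4
b = np.array([1.0, 0.5])            # constant forcing b(t) = b
T, n = 1.5, 3000
h = T / n

lhs = expm_taylor(M * T) @ b        # x(T) = e^{MT} b in the equality case
rhs = b.copy()                      # b(T) + \int_0^T e^{M(T-t)} M b dt, midpoint rule
for k in range(n):
    t = (k + 0.5) * h
    rhs = rhs + h * (expm_taylor(M * (T - t)) @ (M @ b))
```

Agreement of `lhs` and `rhs` reflects the identity \(\int _0^T \mathrm{e}^{M(T-t)}M\,\mathrm{d}t = \mathrm{e}^{MT} - {\mathbf {I}}\).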
1.2 Main lemmas and their proofs
Lemma 5
Under Assumption 1, with \(\mathbf {x}^{o }(t), \mathbf {x}^{p }(t)\) for \(t \in [0,T]\) defined in Sect. 3.1, we have, for \(j=1,\ldots ,m\),
$$\begin{aligned} \begin{aligned}&{\mathbf {E}}[\Vert \mathbf {x}^\mathrm{{o}}_j(t) - \mathbf {x}_j^\mathrm{{p}}(t)\Vert ^2] \\&\le \Vert \mathbf {x}^{o }_{i_\star }-\mathbf {x}^{p }_{i_\star } \Vert ^2\cdot \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) t} \cdot \mathrm{{e}}^{-C_d\cdot d(j,i_{\star })}, \end{aligned} \end{aligned}$$
(47)
where \(C_d\) is any given positive constant and \(C_1({\mathbf {f}}, \varvec{\sigma })\) is defined in (11).
Proof
Recall that \(\mathbf {x}^{\text {o}}_j(t)\), \(\mathbf {x}_j^{\text {p}}(t)\) are solutions of
$$\begin{aligned}&\mathrm{{d}}{\mathbf {x}}_j(t) = {\mathbf {f}}_j(t, {\mathbf {x}}_{j-1}(t), {\mathbf {x}}_j(t),{\mathbf {x}}_{j+1}(t) )\mathrm{{d}}t \\&\qquad \qquad + \varvec{\sigma }_j(t,{\mathbf {x}}_j(t))\mathrm{{d}}{\mathbf {W}}_j(t), \ \text {for} \ j=1,\ldots ,m,\\&{\mathbf {x}}_{0}(t) = {\mathbf {x}}_{m}(t), \ {\mathbf {x}}_{m+1}(t) = {\mathbf {x}}_{1}(t), \ t \in [0,T], \end{aligned}$$
with corresponding initial values \(\mathbf {x}^{\text {o}}_j(0)\), \(\mathbf {x}_j^{\text {p}}(0)\), which only differ at the \(i_{\star }\)th entry. Thus, we have
$$\begin{aligned} \mathrm{{d}}({\mathbf {x}}^{\text {o}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t)) = \Delta _{{\mathbf {f}}_j}(t) \mathrm{{d}}t + \Delta _{\varvec{\sigma }_{j}}(t)\mathrm{{d}}{\mathbf {W}}_j(t), \end{aligned}$$
with
$$\begin{aligned}&\Delta _{{\mathbf {f}}_j}(t) = {\mathbf {f}}_j(t, {\mathbf {x}}^{\text {o}}_{j-1}(t), {\mathbf {x}}^{\text {o}}_j(t),{\mathbf {x}}^{\text {o}}_{j+1}(t) ) \\&\qquad \qquad - {\mathbf {f}}_j(t, {\mathbf {x}}^{\text {p}}_{j-1}(t), {\mathbf {x}}^{\text {p}}_j(t),{\mathbf {x}}^{\text {p}}_{j+1}(t) ),\\&\Delta _{\varvec{\sigma }_{j}}(t) = (\varvec{\sigma }_j(t,{\mathbf {x}}^{\text {o}}_j(t))- \varvec{\sigma }_j(t,{\mathbf {x}}^{\text {p}}_j(t))). \end{aligned}$$
According to Itô’s formula,
$$\begin{aligned} \begin{aligned}&\mathrm{{d}}\Vert {\mathbf {x}}^{\text {o}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t)\Vert ^2\\&= (2({\mathbf {x}}^{\text {o}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t))^{T}\Delta _{{\mathbf {f}}_j}(t) + \Vert \Delta _{\varvec{\sigma }_{j}}(t)\Vert ^2)\mathrm{{d}}t \\&\quad +2({\mathbf {x}}^{\text {o}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t))^{T}\Delta _{\varvec{\sigma }_{j}}(t) \mathrm{{d}}{\mathbf {W}}_j(t). \end{aligned} \end{aligned}$$
(48)
Its solution can be written as
$$\begin{aligned}&\Vert {\mathbf {x}}^{\text {o}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t)\Vert ^2 - \Vert {\mathbf {x}}^{\text {o}}_j(0) - {\mathbf {x}}^{\text {p}}_j(0)\Vert ^2 \\&= \int _{0}^{t}(2({\mathbf {x}}^{\text {o}}_j(s) - {\mathbf {x}}^{\text {p}}_j(s))^{T}\Delta _{{\mathbf {f}}_j}(s) + \Vert \Delta _{\varvec{\sigma }_{j}}(s)\Vert ^2)\mathrm{{d}}s\\&\quad + \int _{0}^{t}2({\mathbf {x}}^{\text {o}}_j(s) - {\mathbf {x}}^{\text {p}}_j(s))^{T}\Delta _{\varvec{\sigma }_{j}}(s) \mathrm{{d}}{\mathbf {W}}_j(s). \end{aligned}$$
Taking expectations on both sides of the above formula, and applying Assumption 1 and the Cauchy–Schwarz inequality, we have
$$\begin{aligned} \begin{aligned}&{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {o}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t)\Vert ^2] - \Vert {\mathbf {x}}^{\text {o}}_j(0) - {\mathbf {x}}^{\text {p}}_j(0)\Vert ^2 \\&= \int _{0}^{t}{\mathbf {E}}[2({\mathbf {x}}^{\text {o}}_j(s) - {\mathbf {x}}^{\text {p}}_j(s))^T\Delta _{{\mathbf {f}}_j}(s) + \Vert \Delta _{\varvec{\sigma }_{j}}(s)\Vert ^2]\mathrm{{d}}s\\&\le \int _{0}^{t}{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {o}}_j(s) - {\mathbf {x}}^{\text {p}}_j(s)\Vert ^2]\mathrm{{d}}s \\&\quad + \int _{0}^{t}{\mathbf {E}}[\Vert \Delta _{{\mathbf {f}}_j}(s)\Vert ^2 + \Vert \Delta _{\varvec{\sigma }_{j}}(s)\Vert ^2]\mathrm{{d}}s\\&\le C_{{\mathbf {f}}}\int _{0}^{t}{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {o}}_{j-1}(s) - {\mathbf {x}}^{\text {p}}_{j-1}(s)\Vert ^2]\mathrm{{d}}s \\&\quad + (C_{{\mathbf {f}}} +C_{\varvec{\sigma }} + 1) \int _{0}^{t}{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {o}}_j(s) - {\mathbf {x}}^{\text {p}}_j(s)\Vert ^2]\mathrm{{d}}s\\&\quad + C_{{\mathbf {f}}} \int _{0}^{t}{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {o}}_{j+1}(s) - {\mathbf {x}}^{\text {p}}_{j+1}(s)\Vert ^2]\mathrm{{d}}s. \end{aligned} \end{aligned}$$
(49)
Equivalently, we can write (49) in the following vector form
$$\begin{aligned} \varvec{\Delta }(t) \preceq \int _{0}^{t}M\cdot \varvec{\Delta }(s)\mathrm{{d}}s + \varvec{\Delta }(0), \end{aligned}$$
where \(\varvec{\Delta }(t)\) is an \(m \times 1\) vector whose jth entry is \({\mathbf {E}}[\Vert \mathbf {x}^{\text {o}}_j(t) - \mathbf {x}^{\text {p}}_j(t)\Vert ^2] \); M is an \(m \times m\) matrix with
$$\begin{aligned}&M_{1,1} =C_{\varvec{\sigma }} + C_{{\mathbf {f}}}+1, M_{1,2} = M_{1,m} =C_{{\mathbf {f}}}, \\&M_{j,j} =C_{\varvec{\sigma }}+ C_{{\mathbf {f}}}+1, M_{j,j+1} = M_{j,j-1} = C_{{\mathbf {f}}},\\&\qquad \text {for} \ j=2,\ldots ,m-1,\\&M_{m,m} =C_{\varvec{\sigma }}+ C_{{\mathbf {f}}}+1, M_{m,1} = M_{m,m-1} = C_{{\mathbf {f}}}, \end{aligned}$$
and other entries are 0; moreover, we have \(\varvec{\Delta }_{i_{\star }}(0) = \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2\) and other entries of \(\varvec{\Delta }(0)\) are 0. According to Lemma 4, we have
$$\begin{aligned} \varvec{\Delta }(t) \preceq \varvec{\Delta }(0) + \int _{0}^{t} \mathrm{{e}}^{M(t-s)}\cdot M \cdot \varvec{\Delta }(0)\mathrm{{d}}s. \end{aligned}$$
We observe that, for the m-dimensional column vector \(M \cdot \varvec{\Delta }(0)\), its \((i_{\star }-1)\)th, \(i_{\star }\)th, and \((i_{\star }+1)\)th entries are \(C_{{\mathbf {f}}}\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2\), \((C_{{\mathbf {f}}} + C_{\varvec{\sigma }}+1)\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \), and \(C_{{\mathbf {f}}}\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \), respectively, while all other entries are 0. Together with Lemma 1 for \(C=C_d\), \(\max \limits _{j,i = 1,2,\ldots ,m} |M_{j,i}| = C_{{\mathbf {f}}}+C_{\varvec{\sigma }}+1\), (34), and the definition \(C_1({\mathbf {f}}, \varvec{\sigma })= (\mathrm{{e}}^{C_d}+\mathrm{{e}}^{-C_d}+1)(C_{{\mathbf {f}}}+C_{\varvec{\sigma }}+1)\), we have
$$\begin{aligned}&\varvec{\Delta }_j(t) \\&\le \varvec{\Delta }_j(0) + \int _{0}^{t}|(\mathrm{{e}}^{M(t-s)})_{j,i_{\star }-1}|\cdot (M \cdot \varvec{\Delta }(0) )_{i_{\star }-1,1}\mathrm{{d}}s \\&\quad + \int _{0}^{t}|(\mathrm{{e}}^{M(t-s)})_{j,i_{\star }}|\cdot (M \cdot \varvec{\Delta }(0))_{i_{\star },1}\mathrm{{d}}s \\&\quad + \int _{0}^{t}|(\mathrm{{e}}^{M(t-s)})_{j,i_{\star }+1}|\cdot (M\cdot \varvec{\Delta }(0))_{i_{\star }+1,1}\mathrm{{d}}s \\&\le \varvec{\Delta }_j(0) + (C_{{\mathbf {f}}}+ C_{\varvec{\sigma }}+1)\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \int _{0}^{t} \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (t-s) }\mathrm{{d}}s \\&\quad \cdot (\mathrm{{e}}^{-C_d\cdot d(j,i_{\star }-1)} + \mathrm{{e}}^{-C_d\cdot d(j,i_{\star })} + \mathrm{{e}}^{-C_d\cdot d(j,i_{\star }+1)}) \\&\le \varvec{\Delta }_j(0) + \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \cdot (\mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })t }-1) \mathrm{{e}}^{-C_d\cdot d(j,i_{\star })} \\&\le \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2\cdot \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })\cdot t } \cdot \mathrm{{e}}^{-C_d\cdot d(j,i_{\star })}, \end{aligned}$$
where the last inequality is derived by considering two cases: when \(j=i_{\star }\), we have \( \varvec{\Delta }_j(0) = \Vert \mathbf {x}^\mathrm{{o}}_{i_\star }-\mathbf {x}^\mathrm{{p}}_{i_\star } \Vert ^2\) and \(d(j,i_{\star }) = 0\), so the result is true; when \(j \ne i_{\star }\), we have \( \varvec{\Delta }_j(0) = 0\), and the result can also be verified. Recalling that \(\varvec{\Delta }_j(t) = {\mathbf {E}}[\Vert \mathbf {x}^{\text {o}}_j(t) - \mathbf {x}^{\text {p}}_j(t)\Vert ^2] \), the proof is complete. \(\square \)
Lemma 6
Under Assumption 1, with \(\mathbf {x}^{o }(t)\), \(\mathbf {x}^{l }(t)\), \(\mathbf {x}^{p }(t)\) defined for \(t \in [0,T]\) in Sects. 3.1 and 3.2, we have, for \(j=1,\ldots ,m\),
$$\begin{aligned} \begin{aligned}&{\mathbf {E}}[ \Vert \mathbf{x}^\mathrm{l}_{ j}(t) - \mathbf {x}_{ j}^\mathrm{p}(t)\Vert ^2 ]\\&\le C_2({\mathbf {f}}, \varvec{\sigma }) \Vert \mathbf {x}^{o }_{i_\star }-\mathbf {x}^{p }_{i_\star } \Vert ^2 \mathrm{{e}}^{2C_1({\mathbf {f}}, \varvec{\sigma }) t} \mathrm{{e}}^{-C_d\cdot (L+1)}, \end{aligned} \end{aligned}$$
(50)
where \(C_d\) is any given positive constant, \(C_1({\mathbf {f}}, \varvec{\sigma })\), \(C_2({\mathbf {f}}, \varvec{\sigma })\) are defined in (11) and (14).
Proof
For simplicity, we consider \(L+2\le i_{\star } \le m-L-1\); this sacrifices no generality, since we can always rotate the indices to make it hold. Then \(B_{i_{\star }} = \{ i_{\star }-L, \ldots , i_{\star }+L\}\).
Suppose \(j \in B^c_{i_{\star }}\). Since \(\mathbf {x}^{\text {l}}_j(t) = \mathbf {x}^{\text {o}}_j(t)\) and \(d(j,i_{\star })\ge L+1\), (50) follows from the conclusion of Lemma 5.
Suppose \(j \in B_{i_{\star }}\). We observe that both \(\mathbf {x}_j^{\text {l}}(t)\) and \(\mathbf {x}_j^{\text {p}}(t)\) follow the evolution system
$$\begin{aligned}&\mathrm{{d}}\mathbf {x}_j(t) ={\mathbf {f}}_{j}(t,\mathbf {x}_{j-1}(t), \mathbf {x}_j(t),\mathbf {x}_{j+1}(t) )\mathrm{{d}}t \\&\qquad \qquad + \varvec{\sigma }_{j}(t,\mathbf {x}_j(t))\mathrm{{d}}{\mathbf {W}}_j(t), \ t \in [0,T], \end{aligned}$$
with initial value \((\mathbf{x}^{\text {o}}_{i_{\star }-L}, \ldots ,\mathbf{x}^{\text {o}}_{i_{\star }-1}, \mathbf{x}^{\text {p}}_{i_{\star }}, \mathbf{x}^{\text {o}}_{i_{\star }+1},\ldots , \mathbf{x}^{\text {o}}_{i_{\star }+L} )^{T}\). However, for \(j\in B_{i_{\star }}^{c}\), \(\mathbf {x}_{j}^{\text {l}}(t)\) is constrained to equal \(\mathbf {x}_{j}^{\text {o}}(t)\), where \(\mathbf {x}^{\text {o}}(t)\) is the solution of (4) with initial value \((\mathbf {x}_1^{\text {o}}, \ldots , \mathbf {x}_m^{\text {o}})^{T}\). Recall that \(\mathbf {x}^{\text {p}}(t)\) also solves (4), but with a locally perturbed initial value, which can be written as \((\mathbf {x}_1^{\text {o}}, \ldots , \mathbf {x}_{i_{\star }-1}^{\text {o}}, \mathbf {x}_{i_\star }^{\text {p}}, \mathbf {x}_{i_{\star }+1}^{\text {o}} , \ldots , \mathbf {x}_m^{\text {o}})^{T}\). Thus, for \(i_{\star }-L \le j \le i_{\star }+L\), we have
$$\begin{aligned} \mathrm{{d}}({\mathbf {x}}^{\text {l}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t)) = \Delta _{{\mathbf {f}}_j}(t) \mathrm{{d}}t + \Delta _{\varvec{\sigma }_{j}}(t)\mathrm{{d}}{\mathbf {W}}_j(t), \end{aligned}$$
with
$$\begin{aligned}&\Delta _{{\mathbf {f}}_j}(t) = {\mathbf {f}}_j(t, {\mathbf {x}}^{\text {l}}_{j-1}(t), {\mathbf {x}}^{\text {l}}_j(t),{\mathbf {x}}^{\text {l}}_{j+1}(t) ) \\&\qquad \qquad - {\mathbf {f}}_j(t, {\mathbf {x}}^{\text {p}}_{j-1}(t), {\mathbf {x}}^{\text {p}}_j(t),{\mathbf {x}}^{\text {p}}_{j+1}(t) ),\\&\Delta _{\varvec{\sigma }_{j}}(t) = (\varvec{\sigma }_j(t,{\mathbf {x}}^{\text {l}}_j(t))- \varvec{\sigma }_j(t,{\mathbf {x}}^{\text {p}}_j(t))). \end{aligned}$$
According to Itô’s formula,
$$\begin{aligned} \begin{aligned}&\mathrm{{d}}\Vert {\mathbf {x}}^{\text {l}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t)\Vert ^2\\&= (2({\mathbf {x}}^{\text {l}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t))^{T}\Delta _{{\mathbf {f}}_j}(t) + \Vert \Delta _{\varvec{\sigma }_{j}}(t)\Vert ^2)\mathrm{{d}}t \\&\quad +2({\mathbf {x}}^{\text {l}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t))^{T}\Delta _{\varvec{\sigma }_{j}}(t) \mathrm{{d}}{\mathbf {W}}_j(t). \end{aligned} \end{aligned}$$
(51)
Result (51) is similar to (48); solving it and proceeding as in the derivation of (49), we obtain
$$\begin{aligned} \begin{aligned}&{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {l}}_j(t) - {\mathbf {x}}^{\text {p}}_j(t)\Vert ^2] - \Vert {\mathbf {x}}^{\text {l}}_j(0) - {\mathbf {x}}^{\text {p}}_j(0)\Vert ^2 \\&\le C_{{\mathbf {f}}}\int _{0}^{t}{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {l}}_{j-1}(s) - {\mathbf {x}}^{\text {p}}_{j-1}(s)\Vert ^2]\mathrm{{d}}s \\&\quad + (C_{{\mathbf {f}}} +C_{\varvec{\sigma }} + 1) \int _{0}^{t}{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {l}}_j(s) - {\mathbf {x}}^{\text {p}}_j(s)\Vert ^2]\mathrm{{d}}s \\&\quad + C_{{\mathbf {f}}} \int _{0}^{t}{\mathbf {E}}[\Vert {\mathbf {x}}^{\text {l}}_{j+1}(s) - {\mathbf {x}}^{\text {p}}_{j+1}(s)\Vert ^2]\mathrm{{d}}s. \end{aligned} \end{aligned}$$
(52)
We define a \((2L+1)\times 1\) vector \(\varvec{\Delta }(t) = \big ({\mathbf {E}}[\Vert \mathbf {x}^{\text {l}}_{i_{\star }-L}(t) - \mathbf {x}_{i_{\star }-L}^{\text {p}}(t)\Vert ^2], \ldots , {\mathbf {E}}[\Vert \mathbf {x}^{\text {l}}_{i_{\star }+L}(t) - \mathbf {x}_{i_{\star }+L}^{\text {p}}(t)\Vert ^2] \big )^{T}\), whose jth entry is \(\varvec{\Delta }_j(t)= {\mathbf {E}}[\Vert \mathbf {x}^{\text {l}}_{i_{\star }-L+j-1}(t) - \mathbf {x}_{i_{\star }-L+j-1}^{\text {p}}(t)\Vert ^2]\), and observe that \(\varvec{\Delta }(0) = {\mathbf {0}}_{(2L+1)\times 1}\). Since \(\mathbf {x}^{\text {l}}_{i_{\star }-L-1}(t) = \mathbf {x}^{\text {o}}_{i_{\star }-L-1}(t)\) and \(\mathbf {x}^{\text {l}}_{i_{\star }+L+1}(t) = \mathbf {x}^{\text {o}}_{i_{\star }+L+1}(t)\), we can write (52) in the following vector form
$$\begin{aligned} \varvec{\Delta }(t) \preceq \int _{0}^{t}M \cdot \varvec{\Delta }(s) \mathrm{{d}}s + {\mathbf {b}}(t), \end{aligned}$$
(53)
where M is a \((2L+1) \times (2L+1)\) tridiagonal matrix with
$$\begin{aligned}&M_{1,1} = C_{{\mathbf {f}}}+C_{\varvec{\sigma }}+1, \ M_{1,2} =C_{{\mathbf {f}}}, \\&M_{j,j} =C_{{\mathbf {f}}}+C_{\varvec{\sigma }}+1, \ M_{j,j+1} =M_{j,j-1} =C_{{\mathbf {f}}},\\&\qquad \ \text {for} \ 2 \le j \le 2L, \\&M_{2L+1,2L} = C_{{\mathbf {f}}}, \ M_{2L+1,2L+1} = C_{{\mathbf {f}}}+C_{\varvec{\sigma }}+ 1, \end{aligned}$$
and \({\mathbf {b}}(t)\) is a \((2L+1)\)-dimensional column vector with \({\mathbf {b}}_{1}(t) =C_{{\mathbf {f}}}\int _{0}^{t} {\mathbf {E}}[\Vert \mathbf {x}^{\text {o}}_{i_{\star }-L-1}(s) - \mathbf {x}_{i_{\star }-L-1}^{\text {p}}(s)\Vert ^2]\mathrm{{d}}s\), \({\mathbf {b}}_{2L+1}(t) = C_{{\mathbf {f}}}\int _{0}^{t}{\mathbf {E}}[\Vert \mathbf {x}^{\text {o}}_{i_{\star }+L+1}(s) - \mathbf {x}_{i_{\star }+L+1}^{\text {p}}(s)\Vert ^2]\mathrm{{d}}s\), while all other entries are 0. According to Lemma 5, we have
$$\begin{aligned} \begin{aligned}&\max _{j=1,\ldots ,2L+1}{\{|{\mathbf {b}}_{j}(t)|\}} \\&\quad \le C_{{\mathbf {f}}} \int _{0}^{t}\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })\cdot s} \mathrm{{e}}^{-C_d(L+1)}\mathrm{{d}}s \\&\quad \le \dfrac{C_{{\mathbf {f}}}}{C_1({\mathbf {f}}, \varvec{\sigma })}\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })\cdot t} \mathrm{{e}}^{-C_d(L+1)}. \end{aligned} \end{aligned}$$
(54)
By Lemma 4, under (53), we have
$$\begin{aligned} \varvec{\Delta }(t) \preceq {\mathbf {b}}(t) + \int _{0}^{t} \mathrm{{e}}^{M(t-s)}\cdot M \cdot {\mathbf {b}}(s)\mathrm{{d}}s. \end{aligned}$$
(55)
For the \((2L+1)\times 1\) vector \(M \cdot {\mathbf {b}}(t)\), we observe
$$\begin{aligned}&(M \cdot {\mathbf {b}}(t))_{1} = (C_{{\mathbf {f}}} + C_{\varvec{\sigma }}+1){\mathbf {b}}_{1}(t) , \ (M \cdot {\mathbf {b}}(t))_{2} =C_{{\mathbf {f}}}{\mathbf {b}}_{1}(t),\\&(M \cdot {\mathbf {b}}(t))_{2L} = C_{{\mathbf {f}}}{\mathbf {b}}_{2L+1}(t) , \\&(M \cdot {\mathbf {b}}(t))_{2L+1} =(C_{{\mathbf {f}}} + C_{\varvec{\sigma }}+1){\mathbf {b}}_{2L+1}(t), \end{aligned}$$
while all other entries are 0. Together with (54), we have
$$\begin{aligned} \begin{aligned}&\max _{j=1,\ldots ,2L+1}{\{|(M \cdot {\mathbf {b}}(t))_{j}|\}} \\&\quad \le \dfrac{C_{{\mathbf {f}}}}{C_1({\mathbf {f}}, \varvec{\sigma })}(C_{{\mathbf {f}}} + C_{\varvec{\sigma }}+1)\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })\cdot t} \mathrm{{e}}^{-C_d(L+1)}. \end{aligned} \end{aligned}$$
(56)
Since \(\max \limits _{j,i=1,\ldots ,2L+1} M_{j,i} = (C_{{\mathbf {f}}} + C_{\varvec{\sigma }}+1)\), according to Lemma 3 and the conclusion of Lemma 1 with \(C=C_d\), we have
$$\begin{aligned} \begin{aligned} |(\mathrm{{e}}^{M(t-s)})_{j,i}| \le |(\mathrm{{e}}^{Mt})_{j,i}| \le \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })t}. \end{aligned} \end{aligned}$$
(57)
From (55), together with (54), (56) and (57), we obtain
$$\begin{aligned}&\varvec{\Delta }_j(t) \\&\quad \le |{\mathbf {b}}_j(t)| + \sum _{k=1,2,2L,2L+1 }\int _{0}^{t}|(\mathrm{{e}}^{M(t-s)})_{j,k}| |(M\cdot {\mathbf {b}}(s))_{k}|\mathrm{{d}}s\\&\quad \le \max _{j=1,\ldots ,2L+1}{\{|{\mathbf {b}}_{j}(t)|\}} \\&\qquad + 4\mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })t}\int _{0}^{t}\max _{j=1,\ldots ,2L+1}{\{|(M \cdot {\mathbf {b}}(s))_{j}|\}}\mathrm{{d}}s \\&\quad \le \dfrac{C_{{\mathbf {f}}}}{C_1({\mathbf {f}}, \varvec{\sigma })}\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })\cdot t} \mathrm{{e}}^{-C_d(L+1)} \\&\qquad \cdot \Big (1+ 4\int _{0}^{t}(C_{{\mathbf {f}}} + C_{\varvec{\sigma }}+1) \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })\cdot s}\mathrm{{d}}s \Big ) \\&\quad \le \dfrac{C_{{\mathbf {f}}}}{C_1({\mathbf {f}}, \varvec{\sigma })}\Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })\cdot t} \mathrm{{e}}^{-C_d(L+1)} \\&\qquad \cdot \Big (1+\dfrac{4(\mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })t}-1)}{\mathrm{{e}}^{C_d}+\mathrm{{e}}^{-C_d}+1}\Big ) \\&\quad \le \dfrac{2C_{{\mathbf {f}}}}{C_1({\mathbf {f}}, \varvec{\sigma })} \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \mathrm{{e}}^{2C_1({\mathbf {f}}, \varvec{\sigma })\cdot t} \mathrm{{e}}^{-C_d(L+1)}, \end{aligned}$$
where the last inequality uses \(\mathrm{{e}}^{C_d}+\mathrm{{e}}^{-C_d}+1 \ge 2\). Recalling the definition of \(\varvec{\Delta }_j(t)\), the proof is complete. \(\square \)
Lemma 7
Under the same setting as in Lemma 6, for any given \(\epsilon >0\), if the local domain radius L satisfies
$$\begin{aligned} L \ge \dfrac{\log {\left( \dfrac{\epsilon }{C_2({\mathbf {f}}, \varvec{\sigma }) \Vert \mathbf {x}^{o }_{i_\star }-\mathbf {x}^{p }_{i_\star }\Vert ^2} \right) }}{-C_d} + \dfrac{2C_1({\mathbf {f}}, \varvec{\sigma })}{C_d}\cdot T, \end{aligned}$$
then \({\mathbf {E}}[ \Vert {\mathbf {x}}^\mathrm{{l}}_j(t) - \mathbf {x}_j^\mathrm{{p}}(t)\Vert ^2 ]\le \epsilon \) for all \(t\le T\) and \(j=1,\ldots ,m\).
Proof
According to Lemma 6, it suffices to ensure
$$\begin{aligned} C_2({\mathbf {f}}, \varvec{\sigma }) \cdot \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star } \Vert ^2 \mathrm{{e}}^{2C_1({\mathbf {f}}, \varvec{\sigma })\cdot t} \mathrm{{e}}^{-C_d(L+1)} \le \epsilon \end{aligned}$$
for all \(t \in [0,T]\), which is guaranteed by
$$\begin{aligned}&\mathrm{{e}}^{-C_d(L+1)} \le \dfrac{\epsilon }{C_2({\mathbf {f}}, \varvec{\sigma })\cdot \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star }\Vert ^2 \cdot \mathrm{{e}}^{2C_1({\mathbf {f}}, \varvec{\sigma })\cdot T } }. \end{aligned}$$
Taking logarithms on both sides, we obtain that the above holds if
$$\begin{aligned} L \ge \dfrac{\log {\left( \dfrac{\epsilon }{C_2({\mathbf {f}}, \varvec{\sigma }) \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star }\Vert ^2} \right) }}{-C_d} + \dfrac{2C_1({\mathbf {f}}, \varvec{\sigma })}{C_d}\cdot T. \end{aligned}$$
The proof is finished. \(\square \)
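The lower bound on L is straightforward to evaluate. The constants below are illustrative (they are not taken from the paper), with \(C_1\) formed as in (11) and \(C_2\) taken as the constant \(2C_{{\mathbf {f}}}/C_1({\mathbf {f}},\varvec{\sigma })\) that appears at the end of the proof of Lemma 6:

```python
import math

# Illustrative constants (not from the paper).
C_f, C_sigma, C_d, T = 1.0, 0.5, 1.0, 2.0
C1 = (math.exp(C_d) + math.exp(-C_d) + 1) * (C_f + C_sigma + 1)
C2 = 2 * C_f / C1                 # constant from the end of Lemma 6's proof
delta0 = 0.1                      # ||x^o_{i*} - x^p_{i*}||^2 at the perturbed site
eps = 1e-6                        # target accuracy

# Smallest integer L satisfying the lower bound of Lemma 7.
L = math.ceil(math.log(eps / (C2 * delta0)) / (-C_d) + 2 * C1 / C_d * T)

# Resulting right-hand side of (50) at t = T.
bound = C2 * delta0 * math.exp(2 * C1 * T) * math.exp(-C_d * (L + 1))
```

With these numbers, `bound` falls below `eps`, as the proof requires; note the radius grows only logarithmically in \(1/\epsilon \) and linearly in T.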
Lemma 8
Under Assumption 1, with \(\tilde{\mathbf {x}}^{o }_j(ih)\), \(\tilde{\mathbf {x}}^{p }_j(ih)\), for \(j=1,\ldots ,m\) and \(i=0,\ldots ,T/h\), defined in Sect. 3.3, we have
$$\begin{aligned} \begin{aligned}&\Vert \tilde{\mathbf {x}}^{o }_j(ih) - \tilde{\mathbf {x}}^{p }_j(ih) \Vert ^2\\&\quad \le \mathrm{{e}}^{-C_d\cdot d(j,i_{\star })}\cdot \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma })( 1+h)ih} \Vert \mathbf {x}^{o }_{i_\star }-\mathbf {x}^{p }_{i_\star }\Vert ^2, \end{aligned} \end{aligned}$$
(58)
where \(C_d\) is any given positive constant, \(C_1({\mathbf {f}}, \varvec{\sigma })\) is defined in (11).
Proof
The iterates \(\tilde{{\mathbf {x}}}^{\text {o}}_{j}(ih)\) are obtained by iterating
$$\begin{aligned} \begin{aligned} \tilde{{\mathbf {x}}}^{\text {o}}_j((i+1)h)&= \tilde{{\mathbf {x}}}^{\text {o}}_j(ih) +\varvec{\sigma }_j(ih,\tilde{{\mathbf {x}}}^{\text {o}}_j(ih))\sqrt{h}W_{i,j} \\&\quad + {\mathbf {f}}_j(ih,\tilde{{\mathbf {x}}}^{\text {o}}_{j-1}(ih), \tilde{{\mathbf {x}}}^{\text {o}}_j(ih),\tilde{{\mathbf {x}}}^{\text {o}}_{j+1}(ih) )h, \end{aligned} \end{aligned}$$
(59)
with initial value \(\tilde{{\mathbf {x}}}^{\text {o}}_{j}(0)={\mathbf {x}}^{\text {o}}_{j}\). Comparing (59) with (16), we have
$$\begin{aligned} \begin{aligned}&\tilde{{\mathbf {x}}}^{\text {o}}_j((i+1)h) - \tilde{{\mathbf {x}}}^{\text {p}}_j((i+1)h)\\&\quad = \tilde{{\mathbf {x}}}^{\text {o}}_j(ih) - \tilde{{\mathbf {x}}}^{\text {p}}_j(ih) +{\tilde{\Delta }}_{\varvec{\sigma }_j} (ih) \sqrt{h}W_{i,j} + {\tilde{\Delta }}_{{\mathbf {f}}_j} (ih) h, \end{aligned} \end{aligned}$$
(60)
with
$$\begin{aligned} {\tilde{\Delta }}_{\varvec{\sigma }_j} (ih)&= \varvec{\sigma }_j(ih,\tilde{{\mathbf {x}}}^{\text {o}}_j(ih)) - \varvec{\sigma }_j(ih,\tilde{{\mathbf {x}}}^{\text {p}}_j(ih)),\\ {\tilde{\Delta }}_{{\mathbf {f}}_j} (ih)&= {\mathbf {f}}_j(ih,\tilde{{\mathbf {x}}}^{\text {o}}_{j-1}(ih), \tilde{{\mathbf {x}}}^{\text {o}}_j(ih),\tilde{{\mathbf {x}}}^{\text {o}}_{j+1}(ih) ) \\&\quad - {\mathbf {f}}_j(ih,\tilde{{\mathbf {x}}}^{\text {p}}_{j-1}(ih), \tilde{{\mathbf {x}}}^{\text {p}}_j(ih),\tilde{{\mathbf {x}}}^{\text {p}}_{j+1}(ih) ). \end{aligned}$$
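For readers who wish to experiment, the update (59) is plain Euler–Maruyama on a periodic chain and can be sketched as follows; the drift and diffusion are passed in as callables, since the concrete \({\mathbf {f}}_j\), \(\varvec{\sigma }_j\) depend on the application (any Lipschitz choices satisfying Assumption 1 would do), and scalar sites are assumed for simplicity.

```python
import numpy as np

def em_step(x, t, h, f, sigma, rng):
    """One step of the scheme (59) on the periodic chain (scalar sites).

    f(t, left, center, right) and sigma(t, center) act componentwise on
    the m sites; the W_{i,j} are i.i.d. standard normals.
    """
    left = np.roll(x, 1)    # x_{j-1}, with x_0 = x_m (periodic wrap)
    right = np.roll(x, -1)  # x_{j+1}, with x_{m+1} = x_1
    w = rng.standard_normal(x.shape)
    return x + f(t, left, x, right) * h + sigma(t, x) * np.sqrt(h) * w

def simulate(x0, T, h, f, sigma, seed=0):
    """Iterate (59) from t = 0 to T with initial value x0."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for i in range(int(round(T / h))):
        x = em_step(x, i * h, h, f, sigma, rng)
    return x
```

With, e.g., the hypothetical choices `f = lambda t, l, c, r: 0.2 * (l - 2 * c + r)` (a discrete Laplacian) and `sigma = lambda t, c: 0.1 * np.cos(c)`, `simulate(np.zeros(8), 1.0, 0.01, f, sigma)` produces one realization of \(\tilde{{\mathbf {x}}}^{\text {o}}(T)\).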
Since \(\tilde{\mathbf {x}}_j^{\text {o}}(ih)\) and \(\tilde{\mathbf {x}}_j^{\text {p}}(ih)\) are independent of \(W_{i,j}\), together with Assumption 1 and the Cauchy–Schwarz inequality, from (60), we have
$$\begin{aligned} \begin{aligned}&{\mathbf {E}}[\Vert \tilde{\mathbf {x}}_j^{\text {o}}((i+1)h) - \tilde{\mathbf {x}}_j^{\text {p}}((i+1)h)\Vert ^2] \\&\quad = {\mathbf {E}}[\Vert \tilde{\mathbf {x}}_j^{\text {o}}(ih) - \tilde{\mathbf {x}}_j^{\text {p}}(ih) \Vert ^2] + {\mathbf {E}}[\Vert {\tilde{\Delta }}_{\varvec{\sigma }_j} (ih) \sqrt{h}W_{i,j} \Vert ^2] \\&\qquad + {\mathbf {E}}[\Vert {\tilde{\Delta }}_{{\mathbf {f}}_j} (ih) h\Vert ^2] + 2{\mathbf {E}}[(\tilde{\mathbf {x}}_j^{\text {o}}(ih) - \tilde{\mathbf {x}}_j^{\text {p}}(ih))^{T} {\tilde{\Delta }}_{{\mathbf {f}}_j} (ih) h] \\&\quad \le {\mathbf {E}}[\Vert \tilde{\mathbf {x}}_j^{\text {o}}(ih) - \tilde{\mathbf {x}}_j^{\text {p}}(ih) \Vert ^2] + {\mathbf {E}}[\Vert {\tilde{\Delta }}_{\varvec{\sigma }_j} (ih)\Vert ^2] h\\&\qquad + {\mathbf {E}}[\Vert {\tilde{\Delta }}_{{\mathbf {f}}_j} (ih) \Vert ^2] h^2 \\&\qquad + ( {\mathbf {E}}[\Vert \tilde{\mathbf {x}}_j^{\text {o}}(ih) - \tilde{\mathbf {x}}_j^{\text {p}}(ih) \Vert ^2] + {\mathbf {E}}[\Vert {\tilde{\Delta }}_{{\mathbf {f}}_j} (ih) \Vert ^2] )h \\&\quad \le (C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2 ){\mathbf {E}}[\Vert \tilde{\mathbf {x}}^{\text {o}}_{j-1}(ih) - \tilde{\mathbf {x}}^{\text {p}}_{j-1}(ih) \Vert ^2] \\&\qquad + (1+(C_{\varvec{\sigma }} + C_{{\mathbf {f}}}+1)h +C_{{\mathbf {f}}}h^2) {\mathbf {E}}[\Vert \tilde{\mathbf {x}}_j^{\text {o}}(ih) - \tilde{\mathbf {x}}_j^{\text {p}}(ih) \Vert ^2] \\&\qquad +(C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2 ) {\mathbf {E}}[\Vert \tilde{\mathbf {x}}^{\text {o}}_{j+1}(ih) - \tilde{\mathbf {x}}^{\text {p}}_{j+1}(ih) \Vert ^2]. \end{aligned} \end{aligned}$$
(61)
Equivalently, we have
$$\begin{aligned} \varvec{\Delta }((i+1)h) \preceq ({\mathbf {I}}_{m} + M) \cdot \varvec{\Delta }(ih), \end{aligned}$$
(62)
where \(\varvec{\Delta }(ih)\) is defined to be an \(m\times 1\) vector with its jth entry \(\varvec{\Delta }_j(ih) = {\mathbf {E}}[\Vert \tilde{\mathbf {x}}_j^{\text {o}}(ih) - \tilde{\mathbf {x}}_j^{\text {p}}(ih)\Vert ^2] \) for \(i=0,\ldots ,T/h\); M is an \(m \times m\) matrix with
$$\begin{aligned}&M_{1,1} =(C_{\varvec{\sigma }} + C_{{\mathbf {f}}}+1)h +C_{{\mathbf {f}}}h^2, M_{1,2} =C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, \\&M_{1,m} =C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, \\&M_{j,j-1} =C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, M_{j,j} =(C_{\varvec{\sigma }}+ C_{{\mathbf {f}}}+1)h +C_{{\mathbf {f}}}h^2, \\&M_{j,j+1} = C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, \quad \text {for} \ j=2,\ldots ,m-1,\\&M_{m,1} = C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2,M_{m,m-1} = C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, \\&M_{m,m} =(C_{\varvec{\sigma }}+ C_{{\mathbf {f}}}+1)h +C_{{\mathbf {f}}}h^2, \end{aligned}$$
and other entries are 0. Iterating (62) \(i\) times, we have
$$\begin{aligned} \varvec{\Delta }(ih) \preceq ({\mathbf {I}}_{m} + M)^{i} \cdot \varvec{\Delta }(0), \end{aligned}$$
(63)
and for \(\varvec{\Delta }(0)\), we know \(\varvec{\Delta }_{i_{\star }}(0) = \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star }\Vert ^2\) and other entries are 0. We observe
$$\begin{aligned} {\mathbf {I}}_{m}+M \preceq \mathrm{{e}}^{M}, \end{aligned}$$
(64)
together with Lemma 1 applied with \(\rho = \max \limits _{j,k = 1,2,\ldots ,m} |(iM)_{j,k}| = ((C_{\varvec{\sigma }}+C_{{\mathbf {f}}}+1) + C_{{\mathbf {f}}}h)ih\), and (41), we have, for \(j=1,\ldots ,m\),
$$\begin{aligned} \begin{aligned} |(({\mathbf {I}}_{m}+ M)^{i})_{j,i_{\star }}|&\le |((\mathrm{{e}}^{M})^i)_{j,i_{\star }}| = |(\mathrm{{e}}^{iM})_{j,i_{\star }}| \\&\le \mathrm{{e}}^{-C_d\cdot d(j,i_{\star })}\cdot \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)ih}. \end{aligned} \end{aligned}$$
(65)
Substituting (65) into (63) results in (58). \(\square \)
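The entrywise mechanism behind (63)–(65) is easy to check numerically: for an entrywise nonnegative circulant tridiagonal \(M\), \(({\mathbf {I}}_m + M)^i \le \mathrm{{e}}^{iM}\) entrywise, the bound of Lemma 1 holds for \(\mathrm{{e}}^{iM}\), and the \(i_\star\)-th column of \(({\mathbf {I}}_m + M)^i\) decays with the ring distance. The concrete values below, standing in for \(C_{\varvec{\sigma }}\), \(C_{{\mathbf {f}}}\), \(h\), \(i\) and \(C_d\), are hypothetical.

```python
import numpy as np

def circulant_tridiag(m, diag, off):
    """Circulant tridiagonal matrix: `diag` on the diagonal, `off` on the
    two (periodically wrapped) off-diagonals."""
    M = np.zeros((m, m))
    idx = np.arange(m)
    M[idx, idx] = diag
    M[idx, (idx + 1) % m] = off
    M[idx, (idx - 1) % m] = off
    return M

def expm_taylor(A, terms=80):
    """Truncated Taylor series for e^A (adequate for these small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

m, h, i, i_star = 16, 0.01, 100, 7
C_sig = C_f = 1.0                       # hypothetical Lipschitz constants
M = circulant_tridiag(m, (C_sig + C_f + 1) * h + C_f * h**2,
                      C_f * h + C_f * h**2)

P = np.linalg.matrix_power(np.eye(m) + M, i)
E = expm_taylor(i * M)
assert np.all(P <= E + 1e-12)           # (64): (I + M)^i <= e^{iM} entrywise

# Lemma 1 bound on e^{iM}, with C = C_d = 1 and rho = max_{j,k} |(iM)_{j,k}|
d = np.minimum(np.abs(np.arange(m) - i_star), m - np.abs(np.arange(m) - i_star))
rho = np.abs(i * M).max()
bound = np.exp(-1.0 * d) * np.exp((np.e + 1 / np.e + 1) * rho)
assert np.all(E[:, i_star] <= bound)

# the i_star-th column of (I + M)^i indeed decays with ring distance
col = P[:, i_star]
assert all(col[d == k].max() <= col[d == k - 1].min()
           for k in range(1, d.max() + 1))
```

The entrywise comparison in the first assertion holds because products of entrywise nonnegative matrices preserve entrywise inequalities, which is exactly how (64) is used in (65).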
Lemma 9
Under Assumption 1, with \(\tilde{\mathbf {x}}^{o }_j(ih)\), \(\tilde{\mathbf {x}}^{l }_j(ih)\), \(\tilde{\mathbf {x}}^{p }_j(ih)\), for \(j=1,\ldots ,m\) and \(i=0,\ldots ,T/h\), defined in Sect. 3.3, we have
$$\begin{aligned} \begin{aligned}&{\mathbf {E}}[\Vert \tilde{\mathbf {x}}^\mathrm{{l}}_j(ih) - \tilde{\mathbf {x}}_j^\mathrm{{p}}(ih)\Vert ^2] \\&\le C_2({\mathbf {f}}, \varvec{\sigma }) \mathrm{{e}}^{2C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)ih} \mathrm{{e}}^{-C_d(L+1)} \Vert \mathbf {x}^{o }_{i_\star }-\mathbf {x}^{p }_{i_\star }\Vert ^2, \end{aligned} \end{aligned}$$
(66)
where \(C_d\) is any given positive constant, \(C_1({\mathbf {f}}, \varvec{\sigma })\) and \(C_2({\mathbf {f}}, \varvec{\sigma })\) are defined in (11) and (14).
Proof
Suppose \(j \in B^c_{i_{\star }}\). Since \(d(j,i_{\star })\ge L+1\) and \(\tilde{\mathbf {x}}^{\text {l}}_j(ih) = \tilde{\mathbf {x}}^{\text {o}}_j(ih)\), the result follows from Lemma 8.
Suppose \(j \in B_{i_{\star }}\). Comparing (16) and (18), we have
$$\begin{aligned} \begin{aligned}&\tilde{{\mathbf {x}}}^{\text {l}}_j((i+1)h) - \tilde{{\mathbf {x}}}^{\text {p}}_j((i+1)h)\\&\quad = \tilde{{\mathbf {x}}}^{\text {l}}_j(ih) - \tilde{{\mathbf {x}}}^{\text {p}}_j(ih) +{\tilde{\Delta }}_{\varvec{\sigma }_j} (ih) \sqrt{h}W_{i,j} + {\tilde{\Delta }}_{{\mathbf {f}}_j} (ih) h, \end{aligned} \end{aligned}$$
(67)
with
$$\begin{aligned} {\tilde{\Delta }}_{\varvec{\sigma }_j} (ih)&= \varvec{\sigma }_j(ih,\tilde{{\mathbf {x}}}^{\text {l}}_j(ih)) - \varvec{\sigma }_j(ih,\tilde{{\mathbf {x}}}^{\text {p}}_j(ih)),\\ {\tilde{\Delta }}_{{\mathbf {f}}_j} (ih)&= {\mathbf {f}}_j(ih,\tilde{{\mathbf {x}}}^{\text {l}}_{j-1}(ih), \tilde{{\mathbf {x}}}^{\text {l}}_j(ih),\tilde{{\mathbf {x}}}^{\text {l}}_{j+1}(ih) ) \\&\quad - {\mathbf {f}}_j(ih,\tilde{{\mathbf {x}}}^{\text {p}}_{j-1}(ih), \tilde{{\mathbf {x}}}^{\text {p}}_j(ih),\tilde{{\mathbf {x}}}^{\text {p}}_{j+1}(ih) ). \end{aligned}$$
During the evolution of \(\tilde{{\mathbf {x}}}^{\text {l}}(ih)\), it is required that \(\tilde{\mathbf {x}}_{k}^{\text {l}}(ih) = \tilde{\mathbf {x}}_{k}^{\text {o}}(ih)\) for \(k\in B_{i_{\star }}^{c}\). Since (67) has the same form as (60), according to (61), we have
$$\begin{aligned}&{\mathbf {E}}[\Vert \tilde{\mathbf {x}}_j^{\text {l}}((i+1)h) - \tilde{\mathbf {x}}_j^{\text {p}}((i+1)h)\Vert ^2] \\&\le (C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2 ){\mathbf {E}}[\Vert \tilde{\mathbf {x}}^{\text {l}}_{j-1}(ih) - \tilde{\mathbf {x}}^{\text {p}}_{j-1}(ih) \Vert ^2] \\&\quad + (1+(C_{\varvec{\sigma }} + C_{{\mathbf {f}}}+1)h +C_{{\mathbf {f}}}h^2) {\mathbf {E}}[\Vert \tilde{\mathbf {x}}_j^{\text {l}}(ih) - \tilde{\mathbf {x}}_j^{\text {p}}(ih) \Vert ^2] \\&\quad +(C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2 ) {\mathbf {E}}[\Vert \tilde{\mathbf {x}}^{\text {l}}_{j+1}(ih) - \tilde{\mathbf {x}}^{\text {p}}_{j+1}(ih) \Vert ^2]. \end{aligned}$$
Namely, for the \((2L+1) \times 1\) vector \(\varvec{\Delta }(ih)\) whose jth entry \(\varvec{\Delta }_j(ih) = {\mathbf {E}}[ \Vert \tilde{\mathbf {x}}^{\text {l}}_{i_{\star }-L+j-1}(ih) - \tilde{\mathbf {x}}^{\text {p}}_{i_{\star }-L+j-1}(ih)\Vert ^2]\) for \(j=1,\ldots ,2L+1\), we have
$$\begin{aligned} \varvec{\Delta }((i+1)h) \preceq ({\mathbf {I}}_{(2L+1)} + M) \cdot \varvec{\Delta }(ih) + \varvec{\delta }(ih), \end{aligned}$$
(68)
where M is a \((2L+1) \times (2L+1)\) tridiagonal matrix with
$$\begin{aligned}&M_{1,1} = (C_{\varvec{\sigma }} + C_{{\mathbf {f}}}+1)h +C_{{\mathbf {f}}}h^2, M_{1,2} =C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, \\&M_{j,j-1} =C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, M_{j,j} =(C_{\varvec{\sigma }} + C_{{\mathbf {f}}}+1)h +C_{{\mathbf {f}}}h^2, \\&M_{j,j+1} = C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, \qquad \text {for} \ 2 \le j \le 2L \\&M_{2L+1,2L} = C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2, \\&M_{2L+1,2L+1} = (C_{\varvec{\sigma }} + C_{{\mathbf {f}}}+1)h +C_{{\mathbf {f}}}h^2, \end{aligned}$$
and \(\varvec{\delta }(ih)\) is a \((2L+1)\)-dimensional vector with
$$\begin{aligned} \varvec{\delta }_{1}(ih) =(C_{{\mathbf {f}}}+C_{{\mathbf {f}}}h)h {\mathbf {E}}[\Vert \tilde{\mathbf {x}}^{\text {o}}_{i_{\star }-L-1}(ih) - \tilde{\mathbf {x}}^{\text {p}}_{i_{\star }-L-1}(ih)\Vert ^2],\\ \varvec{\delta }_{2L+1}(ih) = (C_{{\mathbf {f}}}+C_{{\mathbf {f}}}h)h{\mathbf {E}}[\Vert \tilde{\mathbf {x}}^{\text {o}}_{i_{\star }+L+1}(ih) - \tilde{\mathbf {x}}_{i_{\star }+L+1}^{\text {p}}(ih)\Vert ^2], \end{aligned}$$
and other entries are 0. Iterating (68) \(i\) times, we obtain
$$\begin{aligned} \begin{aligned} \varvec{\Delta }(ih)&\preceq \sum _{k=0}^{i-1}({\mathbf {I}}_{2L+1} + M)^{i-1-k} \cdot \varvec{\delta }(kh) \\&\quad + ({\mathbf {I}}_{2L+1} + M)^{i}\cdot \varvec{\Delta }(0), \end{aligned} \end{aligned}$$
(69)
and we see \(\varvec{\Delta }(0) = {\mathbf {0}}_{(2L+1) \times 1}\) from the definition of \(\tilde{\mathbf {x}}^{\text {l}}_j(ih)\), \(\tilde{\mathbf {x}}^{\text {p}}_j(ih)\). According to (64) and (41), together with Lemma 1 applied with \(\rho = \max \limits _{j,l = 1,2,\ldots ,2L+1} |(iM)_{j,l}| = ((C_{\varvec{\sigma }}+C_{{\mathbf {f}}}+1) + C_{{\mathbf {f}}}h)ih\), we have, for \(j,l=1,\ldots ,2L+1\),
$$\begin{aligned} \begin{aligned}&|(({\mathbf {I}}_{2L+1}+ M)^{i-1-k})_{j,l}| \\&\le |((\mathrm{{e}}^{M})^{(i-1-k)})_{j,l}| = |(\mathrm{{e}}^{(i-1-k)M})_{j,l}| \\&\le \mathrm{{e}}^{-C_d\cdot d(j,l)} \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)(i-1-k)h} \le \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)(i-1-k)h}. \end{aligned} \end{aligned}$$
(70)
According to Lemma 8, we have
$$\begin{aligned} \begin{aligned}&\max \{ |\varvec{\delta }_{1}(kh)| , |\varvec{\delta }_{2L+1}(kh)| \} \\&\le (C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2) \mathrm{{e}}^{-C_d(L+1)} \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)kh} \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star }\Vert ^2. \end{aligned} \end{aligned}$$
(71)
Summing the geometric series and using the inequality \(\mathrm{{e}}^{x} \ge 1+x\), we have
$$\begin{aligned} \begin{aligned} \sum _{k=0}^{i-1} \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)kh}&= \dfrac{ \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)ih}-1}{\mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)h}-1}\\&\le \dfrac{ \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)ih}-1}{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)h}. \end{aligned} \end{aligned}$$
(72)
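As a quick numerical sanity check of (72), with a hypothetical value standing in for \(C_1({\mathbf {f}}, \varvec{\sigma })(1+h)\):

```python
import math

a, h, i = 2.0, 0.01, 100  # `a` stands in for C_1(f, sigma)(1 + h); values hypothetical
s = sum(math.exp(a * k * h) for k in range(i))
closed = (math.exp(a * i * h) - 1) / (math.exp(a * h) - 1)  # geometric sum
bound = (math.exp(a * i * h) - 1) / (a * h)                 # (72), via e^x - 1 >= x
assert math.isclose(s, closed, rel_tol=1e-9)
assert s <= bound
```

For small \(h\) the bound is nearly tight, since \(\mathrm{{e}}^{ah}-1 \approx ah\).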
Substituting (70), (71), (72) into (69), we have, for \(j=1,\ldots ,2L+1\),
$$\begin{aligned}&\varvec{\Delta }_j(ih) \\&\quad \le \sum _{k=0}^{i-1}(({\mathbf {I}}_{(2L+1)} + M)^{i-1-k})_{j,1} \cdot \varvec{\delta }_1(kh) \\&\qquad + \sum _{k=0}^{i-1}(({\mathbf {I}}_{(2L+1)} + M)^{i-1-k})_{j,2L+1} \cdot \varvec{\delta }_{2L+1}(kh)\\&\quad \le 2\cdot \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)ih}\cdot \mathrm{{e}}^{-C_d(L+1)} \cdot \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star }\Vert ^2\\&\quad \quad \cdot (C_{{\mathbf {f}}}h+C_{{\mathbf {f}}}h^2) \cdot \sum _{k=0}^{i-1} \mathrm{{e}}^{C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)kh} \\&\quad \le \dfrac{2C_{{\mathbf {f}}}}{C_1({\mathbf {f}}, \varvec{\sigma })} \mathrm{{e}}^{2C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)ih} \mathrm{{e}}^{-C_d(L+1)} \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star }\Vert ^2. \end{aligned}$$
Recalling the definition of \(\varvec{\Delta }_j(ih)\), the proof is complete. \(\square \)
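The mechanism of Lemma 9 can also be observed in a direct simulation: perturb a single site, evolve the original, perturbed, and localized chains with shared Brownian increments, and compare the gaps. Everything concrete below (the drift \(f\), diffusion \(\sigma\), the sizes, and the helper `em_chain`) is an illustrative stand-in under Assumption 1, not the paper's setup.

```python
import numpy as np

def em_chain(x0, steps, h, f, sigma, noises, frozen=None, frozen_traj=None):
    """Euler-Maruyama on the periodic chain, returning the whole trajectory.

    If `frozen` is given, those sites are overwritten after each step with
    the externally supplied trajectory `frozen_traj`, mimicking the localized
    scheme, which keeps sites outside the patch at their original values.
    """
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for i in range(steps):
        left, right = np.roll(x, 1), np.roll(x, -1)
        x = x + f(i * h, left, x, right) * h \
              + sigma(i * h, x) * np.sqrt(h) * noises[i]
        if frozen is not None:
            x[frozen] = frozen_traj[i + 1][frozen]
        traj.append(x.copy())
    return np.array(traj)

m, h, steps, i_star, L = 32, 0.01, 100, 10, 5
f = lambda t, l, c, r: 0.2 * (l - 2 * c + r)   # hypothetical Lipschitz drift
sigma = lambda t, c: 0.1 * np.cos(c)           # hypothetical Lipschitz diffusion
noises = np.random.default_rng(0).standard_normal((steps, m))  # shared W_{i,j}

x_o = np.zeros(m)
x_p = x_o.copy()
x_p[i_star] += 1.0                              # perturb site i_star only
traj_o = em_chain(x_o, steps, h, f, sigma, noises)
traj_p = em_chain(x_p, steps, h, f, sigma, noises)

d = np.minimum(np.abs(np.arange(m) - i_star), m - np.abs(np.arange(m) - i_star))
outside = d >= L + 1                            # complement of the patch
traj_l = em_chain(x_p, steps, h, f, sigma, noises,
                  frozen=outside, frozen_traj=traj_o)

gap_lp = ((traj_l - traj_p) ** 2).max()         # localized vs perturbed
gap_op = ((traj_o - traj_p) ** 2)[:, i_star].max()
assert gap_lp < 1e-6 * gap_op                   # boundary leakage is negligible
```

The localized trajectory tracks the fully perturbed one up to an error driven only by the \(\tilde{\mathbf {x}}^{\text {o}}\)–\(\tilde{\mathbf {x}}^{\text {p}}\) mismatch at distance \(L+1\), which is exponentially small, as (66) predicts.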
Lemma 10
Under the same settings as in Lemma 9, given any positive constant \(\epsilon \), if
$$\begin{aligned} L \ge \dfrac{\log {\left( \dfrac{\epsilon }{C_2({\mathbf {f}}, \varvec{\sigma }) \Vert {\mathbf {x}}^{o }_{i_{\star }} - {\mathbf {x}}^{p }_{i_{\star }}\Vert ^2} \right) }}{-C_d}+\dfrac{2C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)}{C_d}T, \end{aligned}$$
we have \({\mathbf {E}}[\Vert {\tilde{\mathbf {x}}}^{l }_j(ih) - \tilde{\mathbf {x}}_j^{p }(ih)\Vert ^2] \le \epsilon \) for \(j=1,\ldots ,m\) and \(i=0,\ldots ,T/h\).
Proof
According to Lemma 9, we only need to solve
$$\begin{aligned} C_2({\mathbf {f}}, \varvec{\sigma }) \cdot \mathrm{{e}}^{2C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)ih} \cdot \mathrm{{e}}^{-C_d(L+1)} \cdot \Vert \mathbf {x}^{\text {o}}_{i_\star }-\mathbf {x}^{\text {p}}_{i_\star }\Vert ^2\le \epsilon , \end{aligned}$$
for \(i=0,\ldots ,T/h\), which holds provided
$$\begin{aligned} \mathrm{{e}}^{-C_d(L+1)} \le \dfrac{\epsilon }{C_2({\mathbf {f}}, \varvec{\sigma })\cdot \mathrm{{e}}^{2C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)T} \cdot \Vert {\mathbf {x}}^{\text {o}}_{i_{\star }} - {\mathbf {x}}^{\text {p}}_{i_{\star }}\Vert ^2}. \end{aligned}$$
Taking the logarithm of both sides, we have
$$\begin{aligned} L \ge \dfrac{\log {\left( \dfrac{\epsilon }{C_2({\mathbf {f}}, \varvec{\sigma })\Vert {\mathbf {x}}^{\text {o}}_{i_{\star }} - {\mathbf {x}}^{\text {p}}_{i_{\star }}\Vert ^2} \right) }}{-C_d}+\dfrac{2C_1({\mathbf {f}}, \varvec{\sigma }) (1+h)}{C_d}T. \end{aligned}$$
The proof is complete. \(\square \)
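The threshold of Lemma 10 is straightforward to evaluate numerically. In the sketch below, \(C_1\), \(C_2\), \(C_d\) and the squared perturbation size are supplied as inputs, since their values depend on \({\mathbf {f}}\) and \(\varvec{\sigma }\) through (11) and (14); the function name and signature are illustrative.

```python
import math

def min_patch_radius(eps, T, h, C1, C2, Cd, pert_sq):
    """Smallest integer L satisfying the condition of Lemma 10.

    pert_sq is ||x^o_{i_star} - x^p_{i_star}||^2; C1, C2 are the constants
    from (11) and (14), Cd the decay rate, eps the target accuracy.
    """
    threshold = math.log(eps / (C2 * pert_sq)) / (-Cd) \
                + 2 * C1 * (1 + h) * T / Cd
    return max(0, math.ceil(threshold))
```

For instance, with all constants and the perturbation set to 1, \(h=0.01\), \(T=1\) and \(\epsilon =10^{-3}\), the function returns 9: the required radius grows only logarithmically in \(1/\epsilon \) and linearly in \(T\).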
1.3 Proofs of propositions and theorems
The proof of Proposition 1:
It is a direct result of Lemma 5. \(\square \)
The proof of Theorem 1:
The conclusions are direct results of Lemmas 6 and 7. \(\square \)
The proof of Theorem 2:
The conclusions are direct results of Lemmas 9 and 10. \(\square \)