
Acceleration of automatic differentiation of solutions to parabolic partial differential equations: a higher order discretization


Abstract

The paper proposes a new automatic/algorithmic differentiation method for the solutions of partial differential equations of parabolic type. In particular, we provide a higher order discretization scheme which is a natural extension of the standard automatic differentiation. A Brownian polynomial approach is introduced to avoid the simulation of Lévy areas. The Lie brackets of the vector fields associated with the stochastic differential equation play an important role in the proposed scheme. The case in which the test function is non-smooth but has a Gateaux derivative is also considered. Numerical examples are shown to confirm the effectiveness of the proposed scheme.


References

  1. Capriotti, L.: Fast Greeks by algorithmic differentiation. J. Comput. Finance 14(3), 3–35 (2011)


  2. Giles, M., Glasserman, P.: Smoking adjoints: fast Monte Carlo Greeks. Risk 19(1), 88–92 (2006)


  3. Griewank, A., Walther, A.: Evaluating derivatives: principles and techniques of algorithmic differentiation, Society for Industrial and Applied Mathematics (2008)

  4. Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes, 2nd ed. North-Holland Mathematical Library (1989)

  5. Kusuoka, S.: Approximation of expectation of diffusion process and mathematical finance. Advanced Studies in Pure Mathematics 31, 147–165 (2001)


  6. Kusuoka, S.: Approximation of expectation of diffusion processes based on Lie algebra and Malliavin calculus. Advances in Mathematical Economics, Springer, pp. 69–83 (2004)

  7. Kusuoka, S., Stroock, D.: Applications of the Malliavin calculus, part I, Stochastic Analysis edited by K. Itô (Katata/Kyoto 1982), pp. 271–306 (1984)

  8. Naito, R., Yamada, T.: A third-order weak approximation of multidimensional Itô stochastic differential equations. Monte Carlo Methods and Applications 25(2), 97–120 (2019)


  9. Nualart, D.: The Malliavin Calculus and Related Topics. Springer (2006)

  10. Pagès, G.: Numerical Probability. Springer (2018)

  11. Takahashi, A.: Asymptotic Expansion Approach in Finance, Large Deviations and Asymptotic Methods in Finance, Eds. P. Friz, J. Gatheral, A. Gulisashvili, A. Jacquier and J. Teichmann. Springer Proceedings in Mathematics & Statistics (2015)

  12. Takahashi, A., Yamada, T.: An asymptotic expansion with push-down of Malliavin weights. SIAM J. Financ. Math. 3(1), 95–136 (2012)


  13. Takahashi, A., Yamada, T.: A weak approximation with asymptotic expansion and multidimensional Malliavin weights. Ann. Appl. Probab. 26(2), 818–856 (2016)


  14. Yamada, T., Yamamoto, K.: A second order discretization using Malliavin weight and Quasi-Monte Carlo method for option pricing, Quantitative Finance, published online (2018)

  15. Yamada, T., Yamamoto, K.: A second-order weak approximation of SDEs using a Markov chain without Lévy area simulation. Monte Carlo Methods and Applications 24(4), 289–308 (2018)


  16. Yamada, T., Yamamoto, K.: Second order discretization of Bismut-Elworthy-Li formula: application to sensitivity analysis. SIAM/ASA Journal on Uncertainty Quantification 7(1), 143–173 (2019)


  17. Yamada, T.: An arbitrary order weak approximation of SDE and Malliavin Monte Carlo: analysis of probability distribution functions. SIAM J. Numer. Anal. 57(2), 563–591 (2019)



Funding

This work is supported by JSPS KAKENHI (Grant Number 19K13736), MEXT, Japan.

Author information


Correspondence to Toshihiro Yamada.

Additional information


The views expressed in this paper are those of the author, and do not necessarily reflect those of Mizuho-DL Financial Technology Co., Ltd.

Appendices

Appendix 1: Useful lemmas

The following lemmas play an important role in the construction of our approximation.

Lemma 2

Let \(\ell \in \mathbb {N}\) and \(i_{j} \in \{1,\cdots ,d\}\), \(j = 1,\cdots ,\ell\). We define the polynomial of length \(\ell\) given by

$$ \begin{array}{@{}rcl@{}} \mathbb{W}_{(i_{1},\cdots,i_{\ell})}(t)=\mathbb{W}_{(i_{1},\cdots,i_{\ell-1})}(t) W^{i_{\ell}}_{t}-{{\int}_{0}^{t}} D_{i_{\ell},s} \mathbb{W}_{(i_{1},\cdots,i_{\ell-1})}(t) ds, t\geq 0, \end{array} $$
(A.1)

where \(D_{i,\cdot}\) is the Malliavin derivative with respect to the i-th Brownian motion \(W^{i}\). Then, for all \(g\in {C}_{b}^{\infty }(\mathbb {R}^{N})\), we have

$$ \begin{array}{@{}rcl@{}} E[ g(\bar{X}_{t}^{x}) I_{(i_{1},\cdots,i_{\ell})}(t)]=E[ g(\bar{X}_{t}^{x}) \mathbb{W}_{(i_{1},\cdots,i_{\ell})}(t)]. \end{array} $$
(A.2)

Proof

Apply the computation of Proposition 3.1 of Naito and Yamada [8] (2019) iteratively. □
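For the reader's convenience, we recall the duality (integration by parts) formula of the Malliavin calculus underlying this proof (see, e.g., Nualart [9]): for \(F \in \mathbb {D}^{1,2}\) and a square-integrable adapted process \(u\),

$$ \begin{array}{@{}rcl@{}} E\Big[ F {{\int}_{0}^{t}} u_{s}\, dW_{s}^{i} \Big]=E\Big[ {{\int}_{0}^{t}} D_{i,s}F\, u_{s}\, ds \Big], \quad i=1,\cdots,d. \end{array} $$

Applied iteratively with \(F=g(\bar{X}_{t}^{x})\) and \(u\) the inner iterated integral, each application removes one stochastic integral at the cost of one Malliavin derivative of the test function; this repeated integration by parts is the mechanism behind the recursion (A.1).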

Lemma 3

Let \(i_{j} \in \{0,1,\ldots,d\}\), j = 1, 2,…,k, k ≥ 3. Let \({\Delta }=\{ (s_{1},\cdots ,s_{k}) \in \mathbb {R}^{k} ; 0 \leq s_{1}<s_{2} <{\cdots } < s_{k} \leq T \}\) and let h be an \(L^{2}({\Delta})\)-valued Wiener functional such that \(s_{1}\mapsto h(s_{1},\ldots ,s_{k})\) is adapted for fixed \(s_{2} < {\cdots } < s_{k}\). Then, there are C > 0, \(L \in \mathbb {N} \cup \{0\}\) such that

$$ \begin{array}{@{}rcl@{}} \sup_{x \in \mathbb{R}^{N}}\left|E [g(\bar{X}_{t}^{x}){\int}_{0<t_{1}<\cdots<t_{k}<t}h(t_{1},\cdots,t_{k})dW_{t_{1}}^{i_{1}}{\cdots} dW^{i_{k}}_{t_{k}}]\right|\leq C\sum\limits_{\ell=0}^{L} \|\nabla^{\ell} g\| t^{3}, \end{array} $$

for t ≥ 0 and \(g\in {C}_{b}^{\infty }(\mathbb {R}^{N})\).

Proof

Apply the duality formula with Lemma 1 of Yamada and Yamamoto [14] (2018). □
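To illustrate the mechanism of Lemma 3 in the simplest case, take \(h\equiv 1\), \(k=3\) and \(i_{1},i_{2},i_{3}\neq 0\), so that the iterated integral is \(I_{(i_{1},i_{2},i_{3})}(t)\). Using \(D_{i,s}\bar{X}_{t}^{x,\ell}=V_{i}^{\ell}(x)\) for \(s\leq t\) (recall that \(\bar{X}_{t}^{x}\) is the one-step Euler–Maruyama approximation of the main text), three successive applications of the duality formula give

$$ \begin{array}{@{}rcl@{}} E[ g(\bar{X}_{t}^{x}) I_{(i_{1},i_{2},i_{3})}(t)]=\sum\limits_{\ell_{1},\ell_{2},\ell_{3}=1}^{N} V_{i_{1}}^{\ell_{1}}(x)V_{i_{2}}^{\ell_{2}}(x)V_{i_{3}}^{\ell_{3}}(x)\, E[\partial_{\ell_{1}}\partial_{\ell_{2}}\partial_{\ell_{3}} g(\bar{X}_{t}^{x})]\, \frac{t^{3}}{3!}, \end{array} $$

whose absolute value is bounded by \(C\|\nabla^{3} g\|_{\infty} t^{3}\). The general case follows the same pattern: each stochastic integral removed by the duality formula produces one spatial derivative of g and one time integration, so that an iterated integral of length \(k\geq 3\) contributes at least three powers of t on (0,T].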

Appendix 2: Proof of Proposition 2

First, we expand \(\varphi ({X_{t}^{x}})\) around \(\varphi (\bar {X}_{t}^{x})\) using the following stochastic Taylor expansion of \({X_{t}^{x}}\):

$$ \begin{array}{@{}rcl@{}} X_{t}^{x,\ell}&=&\bar{X}_{t}^{x,\ell}+\sum\limits_{i_{1},i_{2}=0}^{d}L_{i_{1}}V_{i_{2}}^{\ell}(x)I_{(i_{1},i_{2})}(t)+\sum\limits_{i_{1},i_{2},i_{3}=0}^{d}L_{i_{1}}L_{i_{2}}V_{i_{3}}^{\ell}(x)I_{(i_{1},i_{2},i_{3})}(t)\\ &&+\mathcal{R}_{\ell}(t,x) \end{array} $$

with \(\textstyle {\mathcal {R}_{\ell }(t,x)={\sum }_{i_{1},i_{2},i_{3},i_{4}=0}^{d}{{\int \limits }_{0}^{t}}{\int \limits }_{0}^{t_{4}}{\int \limits }_{0}^{t_{3}}{\int \limits }_{0}^{t_{2}}L_{i_{1}}L_{i_{2}}L_{i_{3}}V_{i_{4}}^{\ell }(X_{t_{1}}^{x})dW_{t_{1}}^{i_{1}}dW_{t_{2}}^{i_{2}}dW_{t_{3}}^{i_{3}}dW_{t_{4}}^{i_{4}}}\), as follows:

$$ \begin{array}{@{}rcl@{}} \varphi({X_{t}^{x}})&=&\varphi(\bar{X}_{t}^{x})+\sum\limits_{\ell=1}^{N} \partial_{\ell}\varphi(\bar{X}_{t}^{x})\{X_{t}^{x,\ell}-\bar{X}_{t}^{x,\ell}\}\\ &&+\frac{1}{2}\sum\limits_{\ell_{1},\ell_{2}=1}^{N} \partial_{\ell_{1}}\partial_{\ell_{2}}\varphi(\bar{X}_{t}^{x}){\prod}_{k=1}^{2}\{X_{t}^{x,\ell_{k}}-\bar{X}_{t}^{x,\ell_{k}}\}+r_{\varphi}(t,x)\\ &=&\varphi(\bar{X}_{t}^{x})+\sum\limits_{\ell=1}^{N} \partial_{\ell}\varphi(\bar{X}_{t}^{x})\{\sum\limits_{i_{1},i_{2}=0}^{d}L_{i_{1}}V_{i_{2}}^{\ell}(x)I_{(i_{1},i_{2})}(t)\\ &&+\sum\limits_{i_{1},i_{2},i_{3}=0}^{d}L_{i_{1}}L_{i_{2}}V_{i_{3}}^{\ell}(x)I_{(i_{1},i_{2},i_{3})}(t)+\mathcal{R}_{\ell}(t,x)\}\\ &&+\frac{1}{2}\sum\limits_{\ell_{1},\ell_{2}=1}^{N} \partial_{\ell_{1}}\partial_{\ell_{2}}\varphi(\bar{X}_{t}^{x}){\prod}_{k=1}^{2}\{\sum\limits_{i_{1},i_{2}=0}^{d}L_{i_{1}}V_{i_{2}}^{\ell_{k}}(x)I_{(i_{1},i_{2})}(t)\\ &&+\sum\limits_{i_{1},i_{2},i_{3}=0}^{d}L_{i_{1}}L_{i_{2}}V_{i_{3}}^{\ell_{k}}(x)I_{(i_{1},i_{2},i_{3})}(t)+\mathcal{R}_{\ell_{k}}(t,x)\}\\ &&+r_{\varphi}(t,x), \end{array} $$

for a Wiener functional \(r_{\varphi }: [0,T] \times \mathbb {R}^{N} \times {\Omega } \rightarrow \mathbb {R}\) such that \(\| r_{\varphi }(t,x) \|_{p} \leq C \| \nabla ^{3} \varphi \|_{\infty } t^{3}\). We also expand \(J_{t}^{(i,j)}\), \(1 \leq i,j \leq N\), as

$$ \begin{array}{@{}rcl@{}} J_{t}^{(i,j)}={\delta_{j}^{i}}+\sum\limits_{\ell=0}^{d}\partial_{i}V_{\ell}^{j}(x)I_{(\ell)}(t)+\sum\limits_{\ell_{1},\ell_{2}=0}^{d}\partial_{i}L_{\ell_{1}}V_{\ell_{2}}^{j}(x)I_{(\ell_{1},\ell_{2})}(t)+\mathcal{R}^{J}_{i,j}(t,x), \end{array} $$

where \(\textstyle {\mathcal {R}^{J}_{i,j}(t,x)={\sum }_{\ell _{1},\ell _{2},\ell _{3}=0}^{d}{{\int \limits }_{0}^{t}}{\int \limits }_{0}^{t_{3}}{\int \limits }_{0}^{t_{2}}\partial _{i}L_{\ell _{1}}L_{\ell _{2}}V_{\ell _{3}}^{j}(X_{t_{1}}^{x})dW_{t_{1}}^{\ell _{1}}dW_{t_{2}}^{\ell _{2}}dW_{t_{3}}^{\ell _{3}}}\).

Then, the expectation \(E[\varphi ({X_{t}^{x}})J_{t}^{(i,j)}]\) is expanded as follows:

$$ \begin{array}{@{}rcl@{}} &&E[\varphi({X_{t}^{x}})J_{t}^{(i,j)}]\\ &=&E[\varphi(\bar{X}_{t}^{x})\{{\delta_{j}^{i}}+\sum\limits_{k=0}^{d} \partial_{i} {V_{k}^{j}}(x)I_{(k)}(t)+ \sum\limits_{k_{1},k_{2}=0}^{d}\partial_{i} L_{k_{1}}V_{k_{2}}^{j}(x) I_{(k_{1},k_{2})}(t)+\mathcal{R}^{J}_{i,j}(t,x)\}]\\ &&+\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{i_{1},i_{2}=0}^{d}L_{i_{1}}V_{i_{2}}^{\ell}(x)I_{(i_{1},i_{2})}(t)\times \{{\delta_{j}^{i}}+\sum\limits_{k=0}^{d}\partial_{i}{V_{k}^{j}}(x)I_{(k)}(t)\\ &&+\sum\limits_{k_{1},k_{2}=0}^{d}\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x) I_{(k_{1},k_{2})}(t)+\mathcal{R}^{J}_{i,j}(t,x)\}] \\ &&+\frac{1}{2}\sum\limits_{\ell_{1},\ell_{2}=1}^{N}E[\partial_{\ell_{1}}\partial_{\ell_{2}}\varphi(\bar{X}_{t}^{x}){\prod}_{k=1}^{2}\{\sum\limits_{i_{1},i_{2}=0}^{d}L_{i_{1}}V_{i_{2}}^{\ell_{k}}(x)I_{(i_{1},i_{2})}(t)\\ &&+\sum\limits_{i_{1},i_{2},i_{3}=0}^{d}L_{i_{1}}L_{i_{2}}V_{i_{3}}^{\ell_{k}}(x)I_{(i_{1},i_{2},i_{3})}(t)\\ &&+\mathcal{R}_{\ell_{k}}(t,x) \} \{{\delta_{j}^{i}}+\sum\limits_{\ell=0}^{d}\partial_{i}V_{\ell}^{j}(x)I_{(\ell)}(t)\\ &&+\sum\limits_{\ell_{1},\ell_{2}=0}^{d}\partial_{i}L_{\ell_{1}}V_{\ell_{2}}^{j}(x)I_{(\ell_{1},\ell_{2})}(t)+\mathcal{R}^{J}_{i,j}(t,x)\}]\\ &&+{\mathcal E}_{\varphi}(t,x), \end{array} $$
(B.1)

for a residual \({\mathcal E}_{\varphi }(t,x)\). Here, \({\mathcal E}_{\varphi }\) satisfies

$$ \begin{array}{@{}rcl@{}} \sup_{x \in \mathbb{R}^{N}} | {\mathcal E}_{\varphi}(t,x) | \leq C \| \nabla^{3} \varphi \|_{\infty} t^{3}, \end{array} $$

which is immediately obtained by standard moment estimates of iterated stochastic integrals.
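For definiteness, one standard form of the moment estimate used here is the following: for a multi-index \(\alpha=(i_{1},\cdots,i_{k})\) with \(i_{j}\in\{0,1,\cdots,d\}\) and \(p\geq 2\), there is \(C_{p}>0\) such that

$$ \begin{array}{@{}rcl@{}} \| I_{\alpha}(t) \|_{p} \leq C_{p}\, t^{(k+n(\alpha))/2}, \quad t\in(0,T], \end{array} $$

where \(n(\alpha)\) denotes the number of zero components of α; the same order holds for iterated integrals with bounded adapted integrands, which covers the remainders \(\mathcal{R}_{\ell}(t,x)\) and \(\mathcal{R}^{J}_{i,j}(t,x)\) when the vector fields \(V_{i}\) have bounded derivatives of all orders.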

We now estimate, using Lemma 3, the expectations in (B.1) which are of order \(O(t^{3})\). Note that the products of iterated stochastic integrals appearing in (B.1) can be written as follows:

$$ \begin{array}{@{}rcl@{}} I_{(k_{1})}(t)I_{(k_{2},k_{3})}(t)&=&I_{(k_{1},k_{2},k_{3})}(t)+I_{(k_{2},k_{1},k_{3})}(t)+I_{(k_{2},k_{3},k_{1})}(t)\\ &&+I_{(0,k_{3})}(t)\textbf{1}_{k_{1}=k_{2}\neq0}+I_{(k_{2},0)}(t)\textbf{1}_{k_{1}=k_{3}\neq0}, \end{array} $$
(B.2)
$$ \begin{array}{@{}rcl@{}} I_{(k_{1},k_{2})}(t)I_{(k_{3},k_{4})}(t) &=&I_{(k_{1},k_{2},k_{3},k_{4})}(t)+I_{(k_{1},k_{3},k_{2},k_{4})}(t)+I_{(k_{1},k_{3},k_{4},k_{2})}(t)\\ &&+I_{(k_{3},k_{4},k_{1},k_{2})}(t)+I_{(k_{3},k_{1},k_{4},k_{2})}(t)+I_{(k_{3},k_{1},k_{2},k_{4})}(t)\\ &&+I_{(k_{1},0,k_{4})}(t)\textbf{1}_{k_{2}=k_{3}\neq0}+I_{(k_{1},k_{3},0)}(t)\textbf{1}_{k_{2}=k_{4}\neq0}\\ &&+I_{(0,k_{4},k_{2})}(t)\textbf{1}_{k_{1}=k_{3}\neq0}+I_{(0,k_{2},k_{4})}(t)\textbf{1}_{k_{1}=k_{3}\neq0}\\ &&+I_{(k_{3},0,k_{2})}(t)\textbf{1}_{k_{1}=k_{4}\neq0}+I_{(k_{3},k_{1},0)}(t)\textbf{1}_{k_{2}=k_{4}\neq0}\\ &&+I_{(0,0)}\textbf{1}_{k_{1}=k_{3}\neq0,k_{2}=k_{4}\neq0}, \end{array} $$
(B.3)
$$ \begin{array}{@{}rcl@{}} &&I_{(k_{1},k_{2})}(t)I_{(k_{3},k_{4},k_{5})}(t)\\ &=&I_{(k_{1},k_{2},k_{3},k_{4},k_{5})}(t)+I_{(k_{1},k_{3},k_{2},k_{4},k_{5})}(t)+I_{(k_{1},k_{3},k_{4},k_{2},k_{5})}(t)+I_{(k_{1},k_{3},k_{4},k_{5},k_{2})}(t)\\ &&+I_{(k_{3},k_{4},k_{5},k_{1},k_{2})}(t)+I_{(k_{3},k_{4},k_{1},k_{5},k_{2})}(t)+I_{(k_{3},k_{4},k_{1},k_{2},k_{5})}(t)\\ &&+I_{(k_{3},k_{1},k_{4},k_{5},k_{2})}(t)+I_{(k_{3},k_{1},k_{4},k_{2},k_{5})}(t)+I_{(k_{3},k_{1},k_{2},k_{4},k_{5})}(t)\\ &&+I_{(k_{1},0,k_{4},k_{5})}(t)\textbf{1}_{k_{2}=k_{3}\neq0}+I_{(0,k_{2},k_{4},k_{5})}(t)\textbf{1}_{k_{1}=k_{3}\neq0}+I_{(k_{1},k_{3},0,k_{5})}(t)\textbf{1}_{k_{2}=k_{4}\neq0}\\ &&+I_{(k_{1},k_{3},k_{4},0)}(t)\textbf{1}_{k_{2}=k_{5}\neq0}+I_{(0,k_{4},k_{2},k_{5})}(t)\textbf{1}_{k_{1}=k_{3}\neq0}+I_{(0,k_{4},k_{5},k_{2})}(t)\textbf{1}_{k_{1}=k_{3}\neq0}\\ &&+I_{(k_{3},k_{4},0,k_{2})}(t)\textbf{1}_{k_{1}=k_{5}\neq0}+I_{(k_{3},0,k_{5},k_{2})}(t)\textbf{1}_{k_{1}=k_{4}\neq0}+I_{(k_{3},k_{4},k_{1},0)}(t)\textbf{1}_{k_{2}=k_{5}\neq0}\\ &&+I_{(k_{3},0,k_{2},k_{5})}(t)\textbf{1}_{k_{1}=k_{4}\neq0}+I_{(k_{3},k_{1},0,k_{5})}(t)\textbf{1}_{k_{2}=k_{4}\neq0}+I_{(k_{3},k_{1},k_{4},0)}(t)\textbf{1}_{k_{2}=k_{5}\neq0}\\ &&+I_{(0,0,k_{5})}\textbf{1}_{k_{1}=k_{3}\neq0,k_{2}=k_{4}\neq0} + I_{(0,k_{4},0)}\textbf{1}_{k_{1}=k_{3}\neq0,k_{2}=k_{5}\neq0} + I_{(k_{3},0,0)}\textbf{1}_{k_{1}=k_{4}\neq0,k_{2}=k_{5}\neq0}.\\ && \end{array} $$
(B.4)
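As a consistency check of (B.2), consider the one-dimensional case \(k_{1}=k_{2}=k_{3}=1\). Using \(I_{(1,1)}(t)=\frac{1}{2}\{(W_{t}^{1})^{2}-t\}\), \(I_{(1,1,1)}(t)=\frac{1}{6}\{(W_{t}^{1})^{3}-3tW_{t}^{1}\}\) and \(I_{(0,1)}(t)+I_{(1,0)}(t)={{\int}_{0}^{t}} s\, dW_{s}^{1}+{{\int}_{0}^{t}} W_{s}^{1}\, ds=tW_{t}^{1}\), the right-hand side of (B.2) becomes

$$ \begin{array}{@{}rcl@{}} 3I_{(1,1,1)}(t)+I_{(0,1)}(t)+I_{(1,0)}(t)=\frac{1}{2}\{(W_{t}^{1})^{3}-3tW_{t}^{1}\}+tW_{t}^{1}=W_{t}^{1}\cdot\frac{1}{2}\{(W_{t}^{1})^{2}-t\}=I_{(1)}(t)I_{(1,1)}(t), \end{array} $$

as expected.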

By Lemma 3, the expectations involving a single iterated stochastic integral of length \(k \geq 3\) are of order \(O(t^{3})\). Then, we obtain

$$ \begin{array}{@{}rcl@{}} E[\varphi(\bar{X}_{t}^{x}) J_{t}^{(i,j)} ]&=&E[\varphi(\bar{X}_{t}^{x})\{{\delta_{j}^{i}}+\sum\limits_{k=0}^{d} \partial_{i} {V_{k}^{j}}(x){W_{t}^{k}}\\ &&+ \sum\limits_{k_{1},k_{2}=0}^{d}\partial_{i} L_{k_{1}}V_{k_{2}}^{j}(x) I_{(k_{1},k_{2})}(t)\}]\\ &&+\sum\limits_{\ell=1}^{N}E [\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)I_{(k_{1},k_{2})}(t){\delta_{j}^{i}}]\\ &&+\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k,k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}{V_{k}^{j}}(x)\{ I_{(k_{1},0)}(t)\textbf{1}_{k=k_{2}\neq0} \\ &&+I_{(0,k_{2})}(t) \textbf{1}_{k=k_{1}\neq0} \}]\\ &&+\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k_{1},k_{2}=1}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x)I_{(0,0)}(t)]\\ &&+\frac{1}{2}\sum\limits_{\ell_{1},\ell_{2}=1}^{N}E[\partial_{\ell_{1}}\partial_{\ell_{2}}\varphi(\bar{X}_{t}^{x})\!\sum\limits_{i_{1},i_{2}=1}^{d}L_{i_{1}}V_{i_{2}}^{\ell_{1}}(x)L_{i_{1}}V_{i_{2}}^{\ell_{2}}(x) I_{(0,0)}(t){\delta_{j}^{i}}]\\ &&+R_{\varphi}(t,x), \end{array} $$
(B.5)

where \(R_{\varphi }(t,x)\) satisfies

$$ \begin{array}{@{}rcl@{}} \sup_{x \in \mathbb{R}^{N}} | R_{\varphi}(t,x) | \leq C \| \nabla^{4} \varphi \|_{\infty} t^{3}. \end{array} $$

Appendix 3: Proof of Proposition 3

We immediately have

$$ \begin{array}{@{}rcl@{}} E[\varphi(\mathbb{X}_{t}^{x})\mathbb{J}_{t}^{(i,j)}]&=&E[\varphi(\bar{X}_{t}^{x})\mathbb{J}_{t}^{(i,j)}]+\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi\left( \bar{X}_{t}^{x}\right)\{\mathbb{X}_{t}^{x,\ell}-\bar{X}_{t}^{x,\ell}\}\mathbb{J}_{t}^{(i,j)}]\\ && +\frac{1}{2}\sum\limits_{\ell_{1},\ell_{2}=1}^{N}E[\partial_{\ell_{1}}\partial_{\ell_{2}}\varphi\left( \bar{X}_{t}^{x}\right){\prod}_{m=1}^{2}\{\mathbb{X}_{t}^{x,\ell_{m}}-\bar{X}_{t}^{x,\ell_{m}}\}\mathbb{J}_{t}^{(i,j)}]\\ &&+O(t^{3}). \end{array} $$
(C.1)

Here, the \(O(t^{3})\) order of the residual is immediately obtained by the moment estimates of iterated stochastic integrals. We next apply Lemma 3 in order to get the following:

$$ \begin{array}{@{}rcl@{}} &&E[\varphi(\mathbb{X}_{t}^{x})\mathbb{J}_{t}^{(i,j)}]\\ &=&E[\varphi(\bar{X}_{t}^{x})\{{\delta_{j}^{i}}+\sum\limits_{k=0}^{d}\partial_{i}{V_{k}^{j}}(x){W_{t}^{k}}+\sum\limits_{k_{1},k_{2}=0}^{d}\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x)\\ &&\times\frac{1}{2}\{W_{t}^{k_{1}}W_{t}^{k_{2}}-t\textbf{1}_{k_{1}=k_{2}\neq0}\}\}]\\ &&+\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi (\bar{X}_{t}^{x}) [\sum\limits_{k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\frac{1}{2}\{W_{t}^{k_{1}}W_{t}^{k_{2}}-t\textbf{1}_{k_{1}=k_{2}\neq0}\}{\delta_{j}^{i}} \end{array} $$
(C.2)
$$ \begin{array}{@{}rcl@{}} &&+\sum\limits_{k,k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}{V_{k}^{j}}(x){W_{t}^{k}}\frac{1}{2}\{W_{t}^{k_{1}}W_{t}^{k_{2}}-t\textbf{1}_{k_{1}=k_{2}\neq0}\} \end{array} $$
(C.3)
$$ \begin{array}{@{}rcl@{}} &&+\sum\limits_{k_{1},k_{2}=1}^{d}\{ L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x)+L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}L_{k_{2}}V_{k_{1}}^{j}(x)\}\frac{1}{4}t^{2} ]\\ &&+\frac{1}{2}\sum\limits_{\ell_{1},\ell_{2}=1}^{N}E[\partial_{\ell_{1}}\partial_{\ell_{2}}\varphi(\bar{X}_{t}^{x})[\sum\limits_{k_{1},k_{2}=1}^{d}\{L_{k_{1}}V_{k_{2}}^{\ell_{1}}(x)L_{k_{1}}V_{k_{2}}^{\ell_{2}}(x)\\ &&+L_{k_{1}}V_{k_{2}}^{\ell_{1}}(x)L_{k_{2}}V_{k_{1}}^{\ell_{2}}(x) \}\frac{1}{4}t^{2}{\delta_{j}^{i}}]]\\ &&+{O}(t^{3}). \end{array} $$
(C.4)
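The replacement of \(I_{(k_{1},k_{2})}(t)\) by the Brownian polynomial \(\frac{1}{2}\{W_{t}^{k_{1}}W_{t}^{k_{2}}-t\textbf{1}_{k_{1}=k_{2}\neq0}\}\) used above can be checked directly in the two basic cases, recalling that \(\bar{X}_{t}^{x}\) depends on the Brownian path only through \(W_{t}\). For \(k_{1}=k_{2}=k\neq0\), Itô's formula gives the pathwise identity

$$ \begin{array}{@{}rcl@{}} I_{(k,k)}(t)={{\int}_{0}^{t}} W_{s}^{k}\, dW_{s}^{k}=\frac{1}{2}\{(W_{t}^{k})^{2}-t\}, \end{array} $$

while for \(k_{1}\neq k_{2}\) with \(k_{1},k_{2}\neq0\) one has \(E[I_{(k_{1},k_{2})}(t)\mid W_{t}]=\frac{1}{2}W_{t}^{k_{1}}W_{t}^{k_{2}}\), so the expectation against any bounded function of \(\bar{X}_{t}^{x}\) is unchanged; the cases in which one index is zero are handled in the same way with the convention \(W_{t}^{0}=t\).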

Appendix 4: Proof of Proposition 4

By applying Lemma 2 and Lemma 3, we have

$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k,k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}{V_{k}^{j}}(x)\frac{1}{2}\{{W_{t}^{k}}W_{t}^{k_{1}}W_{t}^{k_{2}}-t{W_{t}^{k}}{\textbf{1}}_{k_{1}=k_{2}\neq0}\}]\\ &=&\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k,k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}{V_{k}^{j}}(x) \frac{1}{2}\{{W_{t}^{k}}W_{t}^{k_{1}}W_{t}^{k_{2}}\\ &&-(t{W_{t}^{k}}{\textbf{1}}_{k_{1}=k_{2}\neq0}+tW_{t}^{k_{1}}{\textbf{1}}_{k=k_{2}\neq0}+tW_{t}^{k_{2}}{\textbf{1}}_{k=k_{1}\neq0} )+(tW_{t}^{k_{1}}{\textbf{1}}_{k=k_{2}\neq0}+tW_{t}^{k_{2}}{\textbf{1}}_{k=k_{1}\neq0} )\}]\\ &=&\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k,k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}{V_{k}^{j}}(x)\\ &&\times\frac{1}{2}\{6{\int}_{0<t_{1}<\cdots<t_{3}<t}dW_{t_{1}}^{k}dW_{t_{2}}^{k_{1}}dW_{t_{3}}^{k_{2}}+(tW_{t}^{k_{1}}{\textbf{1}}_{k=k_{2}\neq0}+tW_{t}^{k_{2}}{\textbf{1}}_{k=k_{1}\neq0} )\}]\\ &=&\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k,k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}{V_{k}^{j}}(x)\frac{1}{2}\{tW_{t}^{k_{1}}\textbf{1}_{k=k_{2}\neq0}+tW_{t}^{k_{2}}\textbf{1}_{k=k_{1}\neq0}\}]+{O}(t^{3}). \end{array} $$

Appendix 5: Proof of Proposition 5

We immediately have

$$ \begin{array}{@{}rcl@{}} &&E[\varphi (\widetilde{X}_{t}^{x} )\widetilde{J}_{t}^{(i,j)}]\\ &=&E[\varphi (\bar{X}_{t}^{x} )\{{\delta_{j}^{i}}+\sum\limits_{k=0}^{d}\partial_{i}{V_{k}^{j}}(x){W_{t}^{k}}+\sum\limits_{k_{1},k_{2}=0}^{d}\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x)\frac{1}{2} \{W_{t}^{k_{1}}W_{t}^{k_{2}}-t\textbf{1}_{k_{1}=k_{2}\neq0}\} \} ]\\ &&+\sum\limits_{\ell=1}^{N} E[\varphi (\bar{X}_{t}^{x} )\sum\limits_{k_{1},k_{2}=1}^{d}\frac{1}{4}L_{k_{1}}V^{\ell}_{k_{2}}(x)\{\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x)-\partial_{i}L_{k_{2}}V_{k_{1}}^{j}(x)\} H_{(\ell)}(\bar{X}_{t}^{x},1) ]t^{2} \end{array} $$
(E.1)
$$ \begin{array}{@{}rcl@{}} &&+\sum\limits_{\ell=1}^{N} E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\frac{1}{2}\{W_{t}^{k_{1}}W_{t}^{k_{2}}-t \textbf{1}_{k_{1}=k_{2}\neq0}\}]{\delta_{j}^{i}}\\ &&+\sum\limits_{\ell=1}^{N} E[\partial_{\ell}\varphi (\bar{X}_{t}^{x} )\sum\limits_{k,k_{1},k_{2}=0}^{d} L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}{V_{k}^{j}}(x) \frac{1}{2}t \{ W_{t}^{k_{1}} \textbf{1}_{k=k_{2}\neq 0}+W_{t}^{k_{2}} \textbf{1}_{k=k_{1}\neq 0}\}]\\ &&+\sum\limits_{\ell=1}^{N} E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k_{1},k_{2}=1}^{d}\{ L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x)+L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}L_{k_{2}}V_{k_{1}}^{j}(x) \} \frac{1}{4}t^{2} ] \\ &&+\frac{1}{2}\sum\limits_{\ell_{1},\ell_{2}=1}^{N}E[\partial_{\ell_{1}}\partial_{\ell_{2}}\varphi(\bar{X}_{t}^{x})[\sum\limits_{k_{1},k_{2}=1}^{d}\{ L_{k_{1}}V_{k_{2}}^{\ell_{1}}(x)L_{k_{1}}V_{k_{2}}^{\ell_{2}}(x)+L_{k_{1}}V_{k_{2}}^{\ell_{1}}(x)L_{k_{2}}V_{k_{1}}^{\ell_{2}}(x)\} \frac{1}{4}t^{2}{\delta_{j}^{i}}]]\\ &&+\sum\limits_{\ell_{1}=1}^{N} E[\partial_{\ell_{1}}\varphi(\bar{X}_{t}^{x}) \sum\limits_{\ell_{2}=1}^{N}\sum\limits_{k_{1},k_{2}=1}^{d}\frac{1}{8}L_{k_{1}}V^{\ell_{1}}_{k_{2}}(x)\{L_{k_{1}}V_{k_{2}}^{\ell_{2}}(x)-L_{k_{2}}V_{k_{1}}^{\ell_{2}}(x)\} H_{(\ell_{2})}(\bar{X}_{t}^{x},1){\delta_{j}^{i}}] t^{2}\\ &&+O(t^{3}). \end{array} $$
(E.2)

Using the following identities

$$ \begin{array}{@{}rcl@{}} E[\varphi (\bar{X}_{t}^{x} ) H_{(\ell)}(\bar{X}_{t}^{x},1) ]&=&E[\partial_{\ell} \varphi (\bar{X}_{t}^{x} ) ],\\ E[\partial_{\ell_{1}}\varphi (\bar{X}_{t}^{x} ) H_{(\ell_{2})}(\bar{X}_{t}^{x},1) ]&=&E[\partial_{\ell_{1}}\partial_{\ell_{2}} \varphi (\bar{X}_{t}^{x} ) ], \end{array} $$

and linearity of expectation, we have

$$ \begin{array}{@{}rcl@{}} E[\varphi (\widetilde{X}_{t}^{x} )\widetilde{J}_{t}^{(i,j)}] &=&E[\varphi (\bar{X}_{t}^{x})\{{\delta_{j}^{i}}+\sum\limits_{k=0}^{d}\partial_{i}{V_{k}^{j}}(x){W_{t}^{k}}+\sum\limits_{k_{1},k_{2}=0}^{d}\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x)\\ &&\times \frac{1}{2} \{W_{t}^{k_{1}}W_{t}^{k_{2}}-t\textbf{1}_{k_{1}=k_{2}\neq 0}\}\}]\\ &&+\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi(\bar{X}_{t}^{x})\sum\limits_{k_{1},k_{2}=0}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\frac{1}{2}\{W_{t}^{k_{1}}W_{t}^{k_{2}}-t \textbf{1}_{k_{1}=k_{2}\neq 0}\}]{\delta_{j}^{i}}\\ &&+\sum\limits_{\ell=1}^{N} E[\partial_{\ell}\varphi (\bar{X}_{t}^{x} )\sum\limits_{k,k_{1},k_{2}=0}^{d} L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}{V_{k}^{j}}(x)\\ &&\times\frac{1}{2}t \{ W_{t}^{k_{1}} \textbf{1}_{k=k_{2}\neq 0}+W_{t}^{k_{2}} \textbf{1}_{k=k_{1}\neq 0}\}]\\ &&+\sum\limits_{\ell=1}^{N}E[\partial_{\ell}\varphi (\bar{X}_{t}^{x} )\sum\limits_{k_{1},k_{2}=1}^{d}L_{k_{1}}V_{k_{2}}^{\ell}(x)\partial_{i}L_{k_{1}}V_{k_{2}}^{j}(x) \frac{1}{2}t^{2} ] \\ &&+\frac{1}{4}\sum\limits_{\ell_{1},\ell_{2}=1}^{N} E[\partial_{\ell_{1}}\partial_{\ell_{2}}\varphi (\bar{X}_{t}^{x} )\sum\limits_{k_{1},k_{2}=1}^{d}L_{k_{1}}V_{k_{2}}^{\ell_{1}}(x)L_{k_{1}}V_{k_{2}}^{\ell_{2}}(x) t^{2} ]{\delta_{j}^{i}} \\ &&+O(t^{3}). \end{array} $$
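We also note that, in the Gaussian setting used above, the weight \(H_{(\ell)}(\bar{X}_{t}^{x},1)\) admits an explicit expression. Assuming that \({\Sigma}(x)=V(x)V(x)^{\top}\) is invertible, one admissible choice is

$$ \begin{array}{@{}rcl@{}} H_{(\ell)}(\bar{X}_{t}^{x},1)=\frac{1}{t}\sum\limits_{e=1}^{N}\sum\limits_{j=1}^{d} [{\Sigma}(x)^{-1}]_{\ell e}\, V_{j}^{e}(x)\, W_{t}^{j}, \end{array} $$

since \(\bar{X}_{t}^{x}=x+V_{0}(x)t+{\sum}_{i=1}^{d}V_{i}(x)W_{t}^{i}\) is Gaussian with covariance matrix \(t{\Sigma}(x)\); the two identities used above then follow from the Gaussian integration by parts formula.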

Appendix 6: Proof of Theorem 3

Since we have

$$ \begin{array}{@{}rcl@{}} \frac{\partial}{\partial x_{i}}E[ f({X_{t}^{x}}) ]= \sum\limits_{k=1}^{N} E[ g_{k}({X_{t}^{x}}) J_{t}^{(i,k)} ], \end{array} $$
(F.1)

we aim to expand \(E[ g_{k}({X_{t}^{x}}) J_{t}^{(i,k)} ]\) using distribution theory on the Wiener space in order to obtain an error bound in terms of \(\| g \|_{\infty }\). Let us consider

$$ \begin{array}{@{}rcl@{}} dX_{t}^{x,\varepsilon}=\varepsilon^{2} V_{0}(X_{t}^{x,\varepsilon})dt+\varepsilon \sum\limits_{i=1}^{d} V_{i}(X_{t}^{x,\varepsilon})d{W_{t}^{i}}, X_{0}^{x,\varepsilon}=x, \end{array} $$
(F.2)

for ε ∈ (0, 1]. We have the stochastic Taylor expansions:

$$ \begin{array}{@{}rcl@{}} X_{t}^{x,k,\varepsilon}&=&x_{k} +\varepsilon^{2} {V^{k}_{0}}(x)t+ \varepsilon \sum\limits_{i=1}^{d} {V^{k}_{i}}(x) {W_{t}^{i}}\\ &&+\varepsilon^{2} \sum\limits_{i_{1},i_{2}=1}^{d} L_{i_{1}}V^{k}_{i_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{i_{1}} dW_{t_{2}}^{i_{2}}+R^{k,\varepsilon}(t,x),\\ J_{t}^{(i,k),\varepsilon}&=&{\delta_{k}^{i}} + \varepsilon \sum\limits_{j=1}^{d} \partial_{i}{V^{k}_{j}}(x) {W_{t}^{j}}+\varepsilon^{2} \partial_{i}{V^{k}_{0}}(x)t\\ &&+\varepsilon^{2} \sum\limits_{j_{1},j_{2}=1}^{d} \partial_{i} L_{j_{1}}V^{k}_{j_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{j_{1}} dW_{t_{2}}^{j_{2}}+E^{(i,k),\varepsilon}(t,x) \end{array} $$

with the remainders \(R^{k,\varepsilon}(t,x)\) and \(E^{(i,k),\varepsilon}(t,x)\), respectively. We define \(\textstyle {Y_{t}^{x,\varepsilon }=(X_{t}^{x,\varepsilon }-x-\varepsilon ^{2} V_{0}(x)t)/\varepsilon }\) and then

$$ \begin{array}{@{}rcl@{}} Y_{t}^{x,\varepsilon}=\sum\limits_{i=1}^{d} V_{i}(x) {W_{t}^{i}}+\varepsilon \sum\limits_{i_{1},i_{2}=1}^{d} L_{i_{1}}V_{i_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{i_{1}} dW_{t_{2}}^{i_{2}}+r^{\varepsilon}(t,x), \end{array} $$
(F.3)

for a Wiener functional \(r^{\varepsilon }(t,x)\). Let \(Y_{t}^{x,0}:=\sum \limits _{i=1}^{d} V_{i}(x) {W_{t}^{i}}\). Since it holds

$$ \begin{array}{@{}rcl@{}} E[ g_{k}(X_{t}^{x,\varepsilon}) J_{t}^{(i,k),\varepsilon} ]={\int}_{\mathbb{R}^{N}} g_{k}(x+\varepsilon^{2} V_{0}(x)t+\varepsilon y) \langle \delta_{y} (Y_{t}^{x,\varepsilon} ), J_{t}^{(i,k),\varepsilon} \rangle dy, \end{array} $$
(F.4)

we next expand \(\langle \delta _{y} (Y_{t}^{x,\varepsilon } ), J_{t}^{(i,k),\varepsilon } \rangle \). We have the following:

$$ \begin{array}{@{}rcl@{}} \langle \delta_{y} (Y_{t}^{x,\varepsilon} ), J_{t}^{(i,k),\varepsilon} \rangle &=& \langle \delta_{y} (Y_{t}^{x,0} ), 1 \rangle {\delta_{k}^{i}} + \varepsilon \langle \delta_{y} (Y_{t}^{x,0} ), \sum\limits_{j=1}^{d} \partial_{i}{V^{k}_{j}}(x) {W_{t}^{j}} \rangle \\ && + \varepsilon \sum\limits_{\ell=1}^{N} \langle \partial_{\ell} \delta_{y} (Y_{t}^{x,0} ), \sum\limits_{i_{1},i_{2}=1}^{d} L_{i_{1}}V^{\ell}_{i_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{i_{1}} dW_{t_{2}}^{i_{2}}\rangle {\delta_{k}^{i}} \\ && + \varepsilon^{2} {{\int}_{0}^{1}} (1-u) \langle \delta_{y} (Y_{t}^{x,\varepsilon u} ), F^{\varepsilon u}(t,x) \rangle du \end{array} $$
(F.5)

where \(F^{\lambda }(t,x) \in \mathbb {D}^{\infty }\) is given by

$$ \begin{array}{@{}rcl@{}} F^{\lambda}(t,x)&=&\sum\limits_{k_{1},k_{2}=1}^{N} H_{(k_{1},k_{2})} \left( Y_{t}^{x,\lambda}, \frac{\partial}{\partial \lambda} Y_{t}^{x,k_{1},\lambda} \frac{\partial}{\partial \lambda} Y_{t}^{x,k_{2},\lambda} J_{t}^{(i,k),\lambda} \right)\\ &&+\sum\limits_{k_{1}=1}^{N} H_{(k_{1})} \left( Y_{t}^{x,\lambda}, \frac{\partial^{2}}{\partial \lambda^{2}} Y_{t}^{x,k_{1},\lambda} J_{t}^{(i,k),\lambda} \right)\\ &&+2\sum\limits_{k_{1}=1}^{N} H_{(k_{1})} \left( Y_{t}^{x,\lambda}, \frac{\partial}{\partial \lambda} Y_{t}^{x,k_{1},\lambda} \frac{\partial}{\partial \lambda} J_{t}^{(i,k),\lambda} \right) + \frac{\partial^{2}}{\partial \lambda^{2}} J_{t}^{(i,k),\lambda} \end{array} $$

for λ ∈ [0, 1], which satisfies that for all k ≥ 1 and \(p \in [1,\infty )\), there is C > 0 such that

$$ \begin{array}{@{}rcl@{}} \sup_{\lambda \in [0,1], x \in \mathbb{R}^{N}}\| F^{\lambda}(t,x) \|_{k,p} \leq C t \end{array} $$
(F.6)

for all t ∈ (0,T]. Here, we used the estimate of Kusuoka and Stroock [7] (1984): for \(\alpha =(\alpha _{1},\cdots ,\alpha _{k}) \in \{1,\cdots ,N \}^{k}\), \(p \in [1,\infty )\), there exist C > 0, \(\ell \in \mathbb {N}\), \(q \in [1,\infty )\) such that

$$ \begin{array}{@{}rcl@{}} \| H_{\alpha}(Y_{t}^{x,\lambda}, G )\|_{p} \leq C t^{-|\alpha|/2} \| G \|_{\ell,q}, \end{array} $$
(F.7)

for t > 0, and the estimates for stochastic integrals: for \(i \in \mathbb {N}\), \(k \in \mathbb {N}\), \(p \in [1,\infty )\), there exists C > 0 such that

$$ \begin{array}{@{}rcl@{}} \left\| \frac{\partial^{i}}{\partial \varepsilon^{i}} Y_{t}^{x,j,\varepsilon} \right\|_{k,p} \leq C t^{(i+1)/2}, \left\| \frac{\partial^{i}}{\partial \varepsilon^{i}} J_{t}^{(j,k),\varepsilon} \right\|_{k,p} \leq C t^{i/2}, \end{array} $$
(F.8)

for t > 0. The third term in the expansion (F.5) is obtained as

$$ \begin{array}{@{}rcl@{}} &&\langle \partial_{\ell} \delta_{y} (Y_{t}^{x,0} ), L_{i_{1}}V^{\ell}_{i_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{i_{1}} dW_{t_{2}}^{i_{2}}\rangle\\ &=& \langle \partial_{\ell} \delta_{y} (V(x) \cdot ), E[ L_{i_{1}}V^{\ell}_{i_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{i_{1}} dW_{t_{2}}^{i_{2}}|W_{t}=\cdot ] \rangle \end{array} $$

with

$$ \begin{array}{@{}rcl@{}} &&E[ L_{i_{1}}V^{\ell}_{i_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{i_{1}} dW_{t_{2}}^{i_{2}}|W_{t}=\cdot ]\\ &=& \langle \delta_{\cdot} (W_{t}),L_{i_{1}}V^{\ell}_{i_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{i_{1}} dW_{t_{2}}^{i_{2}} \rangle\\ &=& \langle \delta_{\cdot} (W_{t}),L_{i_{1}}V^{\ell}_{i_{2}}(x) \frac{1}{2} \{ W_{t}^{i_{1}}W_{t}^{i_{2}} - t \textbf{1}_{i_{1}=i_{2}} \} \rangle\\ &=&E[ L_{i_{1}}V^{\ell}_{i_{2}}(x) \frac{1}{2} \{ W_{t}^{i_{1}}W_{t}^{i_{2}} - t \textbf{1}_{i_{1}=i_{2}} \}|W_{t} = \cdot ]. \end{array} $$

Then, we have

$$ \begin{array}{@{}rcl@{}} &&\langle \partial_{\ell} \delta_{y} (Y_{t}^{x,0} ), L_{i_{1}}V^{\ell}_{i_{2}}(x) {{\int}_{0}^{t}}{\int}_{0}^{t_{2}} dW_{t_{1}}^{i_{1}} dW_{t_{2}}^{i_{2}}\rangle\\ &=_{{\mathcal S}^{\prime}}& \langle \partial_{\ell} \delta_{y} (V(x) \cdot ), E[ L_{i_{1}}V^{\ell}_{i_{2}}(x) \frac{1}{2} \{ W_{t}^{i_{1}}W_{t}^{i_{2}} -t \textbf{1}_{i_{1}=i_{2}} \}|W_{t}=\cdot]p^{W_{t}}(\cdot) \rangle_{{\mathcal S}}\\ &=& \langle \partial_{\ell} \delta_{y} (Y_{t}^{x,0} ), L_{i_{1}}V^{\ell}_{i_{2}}(x) \frac{1}{2} \{ W_{t}^{i_{1}}W_{t}^{i_{2}} -t \textbf{1}_{i_{1}=i_{2}} \} \rangle\\ &=& \langle \delta_{y} (Y_{t}^{x,0} ), \sum\limits_{e=1}^{N} \sum\limits_{j_{1},j_{2},j_{3}=1}^{d} L_{j_{1}}V_{j_{2}}^{\ell}(x)V_{j_{3}}^{e}(x)A_{\ell e}(x)\\ &&\quad\times \frac{1}{2t}\{ W^{j_{1}}_{t}W^{j_{2}}_{t}W^{j_{3}}_{t} -W^{j_{1}}_{t} t \textbf{1}_{j_{2}=j_{3}} -W^{j_{2}}_{t} t \textbf{1}_{j_{1}=j_{3}} -W^{j_{3}}_{t} t \textbf{1}_{j_{1}=j_{2}} \} \rangle. \end{array} $$

Here, \(p^{W_{t}}\) is the density of \(W_{t}\). Letting ε = 1, we have

$$ \begin{array}{@{}rcl@{}} E[ g_{k}({X_{t}^{x}}) J_{t}^{i,k} ]&=&E[ g_{k}(\bar{X}_{t}^{x}) ] {\delta_{k}^{i}} + E[ g_{k}(\bar{X}_{t}^{x}) \sum\limits_{j=1}^{d} \partial_{i}{V^{k}_{j}}(x) {W_{t}^{j}} ] \\ && + E[ g_{k}({X_{t}^{x}}) \sum\limits_{e=1}^{N} \sum\limits_{j_{1},j_{2},j_{3}=1}^{d} L_{j_{1}}V_{j_{2}}^{\ell}(x)V_{j_{3}}^{e}(x)A_{\ell e}(x)\\ && \frac{1}{2t}\{ W^{j_{1}}_{t}W^{j_{2}}_{t}W^{j_{3}}_{t} -W^{j_{1}}_{t} t \textbf{1}_{j_{2}=j_{3}} -W^{j_{2}}_{t} t \textbf{1}_{j_{1}=j_{3}} -W^{j_{3}}_{t} t \textbf{1}_{j_{1}=j_{2}} \} ]{\delta_{k}^{i}}\\ && + \mathscr{R}(t,x), \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} \mathscr{R}(t,x)={{\int}_{0}^{1}} (1-u) E[ g_{k}(\hat{X}_{t}^{x,u}) F^{u}(t,x) ] du \end{array} $$

with \(\hat {X}_{t}^{x,\lambda }=x+V_{0}(x)t+Y_{t}^{x,\lambda }\), λ ∈ [0, 1]. By the estimate (F.6), the remainder satisfies \(\textstyle {\sup _{x \in \mathbb {R}^{N}}| {\mathscr{R}}(t,x) |\leq C \| g \|_{\infty } t}\) for some C > 0 independent of the function g and of t > 0.

Therefore, we get

$$ \begin{array}{@{}rcl@{}} \sup_{x \in \mathbb{R}^{N}} \Big|\frac{\partial}{\partial x_{i}}E[ f({X_{t}^{x}}) ]-\sum\limits_{k=1}^{N} \widehat{Q_{t}^{J,(i,k)}} g_{k}(x) \Big| \leq C \| g \|_{\infty} t. \end{array} $$
(F.9)
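We also record how the weight \(\frac{1}{2t}\{W_{t}^{j_{1}}W_{t}^{j_{2}}W_{t}^{j_{3}}-\cdots\}\) appearing in the third term above is produced; in this sketch we read \(A(x)=(A_{\ell e}(x))_{\ell,e}\) as the inverse of \(V(x)V(x)^{\top}\). The Malliavin covariance matrix of \(Y_{t}^{x,0}=V(x)W_{t}\) is \(tV(x)V(x)^{\top}\), and the integration by parts \(\langle \partial_{\ell}\delta_{y}(Y_{t}^{x,0}), G \rangle = \langle \delta_{y}(Y_{t}^{x,0}), H_{(\ell)}(Y_{t}^{x,0},G) \rangle\) holds with

$$ \begin{array}{@{}rcl@{}} H_{(\ell)}(Y_{t}^{x,0},G)=\sum\limits_{e=1}^{N}\sum\limits_{j=1}^{d}\frac{A_{\ell e}(x)V_{j}^{e}(x)}{t}\Big\{ G\, W_{t}^{j}-{{\int}_{0}^{t}} D_{j,s}G\, ds \Big\}. \end{array} $$

Taking \(G=L_{j_{1}}V_{j_{2}}^{\ell}(x)\frac{1}{2}\{W_{t}^{j_{1}}W_{t}^{j_{2}}-t\textbf{1}_{j_{1}=j_{2}}\}\) and summing over \(j_{1},j_{2}\) yields exactly the expression used above.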

Appendix 7: Proof of Lemma 1

We have already seen that (5.28) holds, i.e., for smooth \(\varphi :\mathbb {R}^{N} \rightarrow \mathbb {R}\),

$$ \begin{array}{@{}rcl@{}} \| P_{s}^{J,(i,j)}\varphi - Q_{s}^{J,(i,j)}\varphi \|_{\infty} \leq C \sum\limits_{e=1}^{4}\|\nabla^{e} \varphi\|_{\infty} s^{3}. \end{array} $$
(G.1)

We note that even if \(g_{k}\) is not smooth (only bounded and measurable), \(x \mapsto P_{t}^{J,(j,k)}g_{k}(x)\) is smooth. To finish the proof, we will give the upper bound of

$$ \begin{array}{@{}rcl@{}} \|\nabla^{e} P_{T-t}^{J,(j,k)}g_{k} \|_{\infty}, \quad \text{for } e=1,\cdots,4. \end{array} $$

The expectation \(P_{t}^{J,(j,k)}g_{k}(x)\) has the following form:

$$ \begin{array}{@{}rcl@{}} E[ g_{k}({X_{t}^{x}}) G(t,x) ] = {\int}_{\mathbb{R}^{N}} g_{k} (y) \langle \delta_{y} ({X_{t}^{x}}), G(t,x) \rangle dy, \end{array} $$

for a Wiener functional \(G(t,x) \in \mathbb {D}^{\infty }\) such that for all m ≥ 1, \(p \in [1,\infty )\) and multi-index α, there is C > 0 such that \(\textstyle {\sup _{x \in \mathbb {R}^{N}}\|\frac {\partial ^{\alpha }}{\partial x^{\alpha }}G(t,x) \|_{m,p} \leq C}\) for all t ∈ (0,T]. Also remark that for all m ≥ 1, \(p \in [1,\infty )\) and multi-index α, there is C > 0 such that \(\textstyle {\sup _{x \in \mathbb {R}^{N}}\|\frac {\partial ^{\alpha }}{\partial x^{\alpha }}{X_{t}^{x}} \|_{m,p} \leq C}\) for all t ∈ (0,T]. By the integration by parts, we have

$$ \begin{array}{@{}rcl@{}} \frac{\partial^{\alpha}}{\partial x^{\alpha}} \langle \delta_{y} ({X_{t}^{x}}), G(t,x) \rangle&=&\sum\limits_{m=1}^{|\alpha|} \sum\limits_{\ell=1}^{p(m)} \sum\limits_{{\upbeta}^{\ell} \in \{1,\cdots,N \}^{m}} \langle \partial^{{\upbeta}^{\ell}} \delta_{y} ({X_{t}^{x}}), \widetilde{G}_{{\upbeta}^{\ell}} (t,x) \rangle\\ &=&\sum\limits_{m=1}^{|\alpha|} \sum\limits_{\ell=1}^{p(m)} \sum\limits_{{\upbeta}^{\ell} \in \{1,\cdots,N \}^{m}} \langle \delta_{y} ({X_{t}^{x}}), H_{{\upbeta}^{\ell}} ({X_{t}^{x}}, \widetilde{G}_{{\upbeta}^{\ell}} (t,x) ) \rangle, \end{array} $$

for some \(p(m) \in \mathbb {N}\), \({\upbeta}^{\ell} \in \{1,\cdots ,N \}^{m}\) and \(\widetilde {G}_{{\upbeta }^{\ell }} (t,x) \in \mathbb {D}^{\infty }\) for m = 1,⋯ ,|α| and \(\ell = 1,\cdots ,p(m)\). Here, \(\widetilde {G}_{\upbeta } (t,x)\), \({\upbeta} \in \{1,\cdots ,N \}^{m}\), m ≤|α| are given by some products of the functionals \(\textstyle {\frac {\partial ^{\gamma }}{\partial x^{\gamma }}G(t,x)}\), \(\gamma \in \{1,\cdots ,N \}^{k}\), k ≤|α|. The weight \(H_{\upbeta } ({X_{t}^{x}}, \widetilde {G}_{\upbeta } (t,x) )\) for such \(\widetilde {G}_{\upbeta } (t,x)\) is estimated through the result in Kusuoka and Stroock [7] (1984) as

$$ \begin{array}{@{}rcl@{}} {\sup_{x \in \mathbb{R}^{N}}\| H_{\upbeta} ({X_{t}^{x}}, \widetilde{G}_{\upbeta} (t,x) ) \|_{p} \leq C t^{-|\upbeta|/2}}, p\geq 1, \end{array} $$

for some C > 0 independent of t > 0. Therefore, we can choose a constant C > 0 independent of t > 0 and g such that

$$ \begin{array}{@{}rcl@{}} \Big|\frac{\partial^{\alpha}}{\partial x^{\alpha}} E[ g_{k}({X_{t}^{x}}) G(t,x) ] \Big| \leq C \| g \|_{\infty} \sum\limits_{m=1}^{|\alpha|} t^{-m/2}, \end{array} $$

from which we further get

$$ \begin{array}{@{}rcl@{}} \|\nabla^{e} P_{T-t}^{J,(j,k)}g_{k} \|_{\infty} \leq C \| g \|_{\infty} \sum\limits_{m=1}^{e} \frac{1}{(T-t)^{m/2}}, \quad \text{for } e=1,\cdots,4. \end{array} $$
(G.2)

By (G.1) and (G.2), finally, we have

$$ \begin{array}{@{}rcl@{}} \| P_{s}^{J,(i,j)}P_{T-t}^{J,(j,k)}g_{k} - Q_{s}^{J,(i,j)}P_{T-t}^{J,(j,k)}g_{k} \|_{\infty} \leq C \| g \|_{\infty} \sum\limits_{e=1}^{4} \frac{s^{3}}{(T-t)^{e/2}}, \end{array} $$

for some C > 0 independent of s,t ∈ (0,T) and the function g.


About this article


Cite this article

Tokutome, K., Yamada, T. Acceleration of automatic differentiation of solutions to parabolic partial differential equations: a higher order discretization. Numer Algor 86, 593–635 (2021). https://doi.org/10.1007/s11075-020-00902-z

