A learning scheme by sparse grids and Picard approximations for semilinear parabolic PDEs
arXiv - CS - Numerical Analysis. Pub Date: 2021-02-24, DOI: arxiv-2102.12051
Jean-François Chassagneux, Junchao Chen, Noufel Frikha, Chao Zhou

Relying on the classical connection between Backward Stochastic Differential Equations (BSDEs) and non-linear parabolic partial differential equations (PDEs), we propose a new probabilistic learning scheme for solving high-dimensional semi-linear parabolic PDEs. This scheme is inspired by the machine learning approach developed with deep neural networks in Han et al. [32]. Our algorithm is based on a Picard iteration scheme in which a sequence of linear-quadratic optimisation problems is solved by means of a stochastic gradient descent (SGD) algorithm. In the framework of a linear specification of the approximation space, we manage to prove a convergence result for our scheme under a smallness condition. In practice, in order to treat high-dimensional examples, we employ sparse grid approximation spaces. In the case of periodic coefficients and using pre-wavelet basis functions, we obtain an upper bound on the global complexity of our method. It shows in particular that the curse of dimensionality is tamed, in the sense that in order to achieve a root mean squared error of order $\epsilon$, for a prescribed precision $\epsilon$, the complexity of the Picard algorithm grows polynomially in $\epsilon^{-1}$, up to a logarithmic factor $|\log(\epsilon)|$ which grows linearly with respect to the PDE dimension. Various numerical results are presented to validate the performance of our method and to compare it with some recent machine learning schemes proposed in Han et al. [20] and Huré et al. [37].
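For readers unfamiliar with the BSDE-PDE connection invoked above, a minimal sketch (in generic notation that may differ from the paper's) is the nonlinear Feynman-Kac relation: if $u$ solves the semilinear parabolic PDE
$\partial_t u + \mathcal{L} u + f(t, x, u, \sigma^{\top} \nabla_x u) = 0$, with terminal condition $u(T, \cdot) = g$,
where $\mathcal{L}$ is the generator of the diffusion $dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t$, then the pair $(Y_t, Z_t) := (u(t, X_t), \sigma^{\top}(t, X_t) \nabla_x u(t, X_t))$ solves the BSDE
$Y_t = g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s \cdot dW_s$.
In the standard Picard construction for such BSDEs (the paper's precise formulation may differ), the previous iterate of $(Y, Z)$ is frozen inside the driver $f$, so that each iteration reduces to a linear-quadratic regression for the new $(Y, Z)$ over the chosen linear approximation space, here a sparse grid basis, which is what makes an SGD solve natural at every stage.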

Last updated: 2021-02-25