The Deep Learning Galerkin Method for the General Stokes Equations
arXiv - CS - Numerical Analysis. Pub Date: 2020-09-18, DOI: arxiv-2009.11701
Jian Li, Jing Yue, Wen Zhang and Wansuo Duan

The finite element, finite difference, finite volume, and spectral methods have achieved great success in solving partial differential equations. However, the high accuracy of these traditional numerical methods comes at the cost of efficiency. In particular, for high-dimensional problems, traditional methods are often infeasible because of the cost of subdividing high-dimensional meshes and the difficulty of handling the differentiability and integrability of high-order terms. In deep learning, a neural network can handle high-dimensional problems by increasing the number of layers or the number of neurons, which gives it a significant advantage over traditional numerical methods. In this article, we consider the Deep Galerkin Method (DGM) for solving the general Stokes equations with a deep neural network, without generating a mesh. The DGM reduces the computational complexity and achieves competitive results. Based on the L2 error, we construct an objective function that controls the quality of the approximate solution. We then prove the convergence of the objective function and the convergence of the neural network to the exact solution. Finally, the effectiveness of the proposed framework is demonstrated through numerical experiments.
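
The mesh-free construction described above, an L2-type objective built from the PDE residual and the boundary conditions and minimized at randomly sampled collocation points, can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the PyTorch network, the unit-square domain, the no-slip boundary data, the zero forcing term, and the equal penalty weights are all placeholder assumptions, and the paper's "general Stokes equations" may contain additional terms not modeled here.

```python
# Illustrative DGM-style loss for the 2D Stokes system
#   -nu * Laplacian(u) + grad(p) = f,  div(u) = 0
# on the unit square, with a Dirichlet boundary penalty (all choices are assumptions).
import torch

torch.manual_seed(0)
nu = 1.0  # assumed viscosity (not specified in the abstract)

# Fully connected network mapping coordinates (x, y) to (u1, u2, p)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),
)

def grad(out, xy):
    # First derivatives of the scalar column `out` with respect to `xy`
    return torch.autograd.grad(out, xy, grad_outputs=torch.ones_like(out),
                               create_graph=True)[0]

def dgm_loss(xy_int, xy_bnd, u_bnd, f):
    # Mesh-free objective: mean-squared PDE residual at interior collocation
    # points plus a penalty on the Dirichlet boundary misfit.
    xy_int.requires_grad_(True)
    uvp = net(xy_int)
    u1, u2, p = uvp[:, 0:1], uvp[:, 1:2], uvp[:, 2:3]

    g1, g2, gp = grad(u1, xy_int), grad(u2, xy_int), grad(p, xy_int)
    u1_xx = grad(g1[:, 0:1], xy_int)[:, 0:1]
    u1_yy = grad(g1[:, 1:2], xy_int)[:, 1:2]
    u2_xx = grad(g2[:, 0:1], xy_int)[:, 0:1]
    u2_yy = grad(g2[:, 1:2], xy_int)[:, 1:2]

    # Momentum residuals and the incompressibility constraint div(u) = 0
    r1 = -nu * (u1_xx + u1_yy) + gp[:, 0:1] - f[:, 0:1]
    r2 = -nu * (u2_xx + u2_yy) + gp[:, 1:2] - f[:, 1:2]
    div = g1[:, 0:1] + g2[:, 1:2]

    bnd = net(xy_bnd)[:, 0:2] - u_bnd  # velocity misfit on the boundary

    return (r1**2 + r2**2 + div**2).mean() + (bnd**2).mean()

# One optimization step on randomly sampled collocation points (placeholder data)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xy_int = torch.rand(1024, 2)                       # interior points of the unit square
t = torch.rand(256, 1)
edge = torch.randint(0, 2, (256, 1)).float()       # pick the side x = 0/1 or y = 0/1
flip = torch.rand(256, 1) < 0.5
xy_bnd = torch.where(flip, torch.cat([t, edge], 1), torch.cat([edge, t], 1))
u_bnd = torch.zeros(256, 2)                        # no-slip boundary data (assumption)
f = torch.zeros(1024, 2)                           # placeholder forcing term

opt.zero_grad()
loss = dgm_loss(xy_int, xy_bnd, u_bnd, f)
loss.backward()
opt.step()
print(float(loss))
```

Enforcing the boundary condition through a penalty term, as above, is one common choice in DGM-style solvers; the relative weighting of the interior residual and the boundary misfit typically has to be tuned for the problem at hand.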

Updated: 2020-09-25