Weak form Theory-guided Neural Network (TgNN-wf) for deep learning of subsurface single- and two-phase flow
Journal of Computational Physics (IF 4.1), Pub Date: 2021-03-24, DOI: 10.1016/j.jcp.2021.110318
Rui Xu, Dongxiao Zhang, Miao Rong, Nanzhe Wang

Deep neural networks (DNNs) are widely used as surrogate models, and incorporating theoretical guidance into DNNs has improved their generalizability. However, most such approaches define the loss function based on the strong form of the conservation laws (via partial differential equations, PDEs), which suffers diminished accuracy when the PDE contains high-order derivatives or the solution exhibits strong discontinuities. Herein, we propose a weak form theory-guided neural network (TgNN-wf), which incorporates the weak form residual of the PDE into the loss function, combined with a data constraint and initial and boundary condition regularizations, to overcome the aforementioned difficulties. The original loss minimization problem is reformulated as a Lagrangian duality problem, so that the weights of the terms in the loss function are optimized automatically. We use domain decomposition with locally-defined test functions, which effectively captures local discontinuities. Two numerical cases demonstrate the superiority of the proposed TgNN-wf over the strong form TgNN: hydraulic head prediction for an unsteady-state 2D single-phase flow problem and saturation profile prediction for a 1D two-phase flow problem. Results show that TgNN-wf consistently achieves higher accuracy than TgNN, especially when strong discontinuities are present in the parameter or solution space. TgNN-wf also trains faster than TgNN when the number of integration subdomains is not too large (fewer than 10,000). Furthermore, TgNN-wf is more robust to noise. Consequently, the proposed TgNN-wf paves the way for a variety of deep learning problems in small-data regimes to be solved more accurately and efficiently.
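
The core ingredients described above (a weak-form PDE residual assembled over subdomains with compactly supported test functions, plus data and initial/boundary terms whose weights behave like Lagrange multipliers) can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' implementation: it assumes a simple 1D diffusion equation u_t = D u_xx as a stand-in for the subsurface flow equations, a small fully-connected surrogate `net`, the bump test function phi(xi) = (1 - xi^2)^2 on each subdomain, and toy observation data; all names, hyperparameters, and the dual-ascent step size are hypothetical.

```python
# Minimal sketch (not the paper's released code) of a weak-form, theory-guided loss.
# Assumptions: PDE u_t = D * u_xx, bump test function phi(xi) = (1 - xi^2)^2,
# toy observations, and illustrative hyperparameters.
import torch

torch.manual_seed(0)
D = 0.1                                    # assumed diffusivity

net = torch.nn.Sequential(                 # surrogate u_theta(x, t)
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def weak_residual(x_lo, x_hi, t_lo, t_hi, n_quad=8):
    """Squared weak-form residual on one space-time subdomain.

    With a test function phi(x) compactly supported in the subdomain,
    integration by parts in x gives
        int (u_t * phi + D * u_x * dphi/dx) dx dt = 0,
    and the boundary term vanishes.
    """
    xi = torch.linspace(-1.0, 1.0, n_quad + 2)[1:-1]           # interior nodes
    Xi, Ti = torch.meshgrid(xi, xi, indexing="ij")
    Xi, Ti = Xi.reshape(-1, 1), Ti.reshape(-1, 1)
    x = (x_lo + x_hi) / 2 + Xi * (x_hi - x_lo) / 2
    t = (t_lo + t_hi) / 2 + Ti * (t_hi - t_lo) / 2
    xt = torch.cat([x, t], dim=1).requires_grad_(True)

    u = net(xt)
    g = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]

    phi = (1 - Xi ** 2) ** 2                                   # bump test function
    dphi_dx = -4 * Xi * (1 - Xi ** 2) * (2.0 / (x_hi - x_lo))  # chain rule via d(xi)/dx

    return (u_t * phi + D * u_x * dphi_dx).mean() ** 2

# Lagrangian-duality style weighting: network parameters minimise the loss,
# while the multipliers are raised by dual ascent on the same (detached) terms.
lam_pde, lam_data = torch.ones(()), torch.ones(())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_obs = torch.rand(64, 2)                                      # toy (x, t) observations
u_obs = torch.sin(torch.pi * x_obs[:, :1])                     # toy observed values
subdomains = [(0.25 * k, 0.25 * (k + 1), 0.0, 1.0) for k in range(4)]

for step in range(200):
    loss_pde = sum(weak_residual(*sd) for sd in subdomains)
    loss_data = ((net(x_obs) - u_obs) ** 2).mean()
    loss = lam_pde * loss_pde + lam_data * loss_data           # IC/BC terms omitted for brevity

    opt.zero_grad()
    loss.backward()
    opt.step()

    lam_pde = lam_pde + 1e-2 * loss_pde.detach()               # dual ascent on violations
    lam_data = lam_data + 1e-2 * loss_data.detach()
```

In this sketch, the dual-ascent step pushes up the multiplier of whichever term is poorly satisfied, which mirrors the paper's idea of optimizing the loss-term weights automatically instead of hand-tuning them; the domain decomposition enters through the list of subdomains, each with its own locally supported test function.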




Updated: 2021-03-24