DPM: A deep learning PDE augmentation method with application to large-eddy simulation
Journal of Computational Physics (IF 4.1), Pub Date: 2020-09-03, DOI: 10.1016/j.jcp.2020.109811
Justin Sirignano, Jonathan F. MacArt, Jonathan B. Freund

A framework is introduced that leverages known physics to reduce overfitting in machine learning for scientific applications. The partial differential equation (PDE) that expresses the known physics is augmented with a neural network that uses available data to learn a description of the corresponding unknown or unrepresented physics. Training within this combined system corrects for missing, unknown, or erroneously represented physics, including discretization errors associated with the PDE's numerical solution. To optimize the network within the PDE, an adjoint PDE is solved to provide high-dimensional gradients, and a stochastic adjoint method (SAM), a generalization of stochastic gradient descent, further accelerates training. The approach is demonstrated for large-eddy simulation (LES) of turbulence. High-fidelity direct numerical simulations (DNS) of decaying isotropic turbulence provide the training data used to learn sub-filter-scale closures for the filtered Navier–Stokes equations. Out-of-sample comparisons show that the DPM outperforms widely used models, even for filter sizes so large that those models become qualitatively incorrect. It also significantly outperforms the same neural network when trained a priori on simple data mismatch, without accounting for the full PDE. Measures of discretization errors, which are well known to be consequential in LES, point to the importance of the unified training formulation, which corrects for them without modification. For comparable accuracy, simulation runtime is significantly reduced. Relaxing the solver's typical discrete enforcement of the divergence-free constraint is also successful, with the DPM instead approximately enforcing incompressibility. Because the training loss function need not correspond directly to the closure being learned, training can incorporate diverse data, including experimental data.
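To make the adjoint-based training idea concrete, here is a minimal sketch in generic notation; the operator \mathcal{N}, closure h, target fields v, and weights \theta are placeholders (and one common sign convention is chosen), not the paper's exact formulation:

```latex
% Sketch, not the paper's notation: forward model = known physics
% \mathcal{N}(u) plus a neural-network closure h(u; \theta).
\frac{\partial u}{\partial t} = \mathcal{N}(u) + h(u;\theta)

% Loss: mismatch against target fields v(t), e.g. filtered DNS data.
J(\theta) = \int_0^T \big\lVert u(t;\theta) - v(t) \big\rVert^2 \,\mathrm{d}t

% Adjoint PDE, integrated backward in time from \hat{u}(T) = 0:
-\frac{\partial \hat{u}}{\partial t}
  = \left(\frac{\partial \mathcal{N}}{\partial u}
        + \frac{\partial h}{\partial u}\right)^{\!\top} \hat{u}
  + 2\,\big(u - v\big)

% Gradient passed to the optimizer (the stochastic adjoint method
% would estimate this from randomly sampled short time windows):
\nabla_{\theta} J = \int_0^T
  \left(\frac{\partial h}{\partial \theta}\right)^{\!\top} \hat{u}\,\mathrm{d}t
```

Because the adjoint is solved with the same numerical discretization as the forward simulation, the resulting gradient reflects the discretized dynamics rather than the exact PDE, which is consistent with the abstract's point that training within the combined system also corrects for discretization error, unlike a priori training on data mismatch alone.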



