Physics-informed machine learning for reduced-order modeling of nonlinear problems
Journal of Computational Physics (IF 4.1), Pub Date: 2021-08-27, DOI: 10.1016/j.jcp.2021.110666
Wenqian Chen, Qian Wang, Jan S. Hesthaven, Chuhua Zhang

A reduced basis method based on a physics-informed machine learning framework is developed for efficient reduced-order modeling of parametrized partial differential equations (PDEs). A feedforward neural network is used to approximate the mapping from the time-parameter input to the reduced coefficients. During the offline stage, the network is trained by minimizing the weighted sum of the residual loss of the reduced-order equations and the data loss of the labeled reduced coefficients, which are obtained by projecting high-fidelity snapshots onto the reduced space. Such a network is referred to as a physics-reinforced neural network (PRNN). Since the number of residual points in the time-parameter space can be very large, an accurate network, referred to as a physics-informed neural network (PINN), can be trained by minimizing only the residual loss. However, for complex nonlinear problems, the solution of the reduced-order equations is less accurate than the projection of the high-fidelity solution onto the reduced space. Therefore, the PRNN trained with the snapshot data is expected to achieve higher accuracy than the PINN. Numerical results demonstrate that, for complex problems, the PRNN is more accurate than both the PINN and a purely data-driven neural network. Under reduced basis refinement, the PRNN can achieve higher accuracy than the direct reduced-order model based on a Galerkin projection. The online evaluation of the PINN/PRNN is orders of magnitude faster than that of the Galerkin reduced-order model.
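
To make the training objective concrete: the PRNN minimizes a weighted sum of the reduced-equation residual, evaluated at collocation points in time-parameter space, and a data misfit on reduced coefficients obtained by projecting snapshots onto the reduced basis; setting the data weight to zero recovers the PINN. The sketch below is not the authors' implementation. It assumes, purely for illustration, a generic steady linear reduced-order system A_r(mu) c = f_r(mu); the network sizes, loss weights, and placeholder operators are hypothetical.

# Minimal sketch (not the authors' code) of the PRNN loss described above,
# assuming a generic steady linear reduced-order system A_r(mu) c = f_r(mu).
# Network sizes, the weights w_res/w_data, and the placeholder data are illustrative.
import torch
import torch.nn as nn

n_rb = 10       # number of reduced basis functions (assumed)
n_param = 2     # dimension of the time/parameter input (assumed)

# Feedforward network: (time, parameters) -> reduced coefficients
net = nn.Sequential(
    nn.Linear(n_param, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, n_rb),
)

def reduced_residual(c, A_r, f_r):
    # Residual of the reduced-order equations, r = A_r c - f_r (illustrative form)
    return torch.einsum("bij,bj->bi", A_r, c) - f_r

def prnn_loss(mu_res, A_r, f_r, mu_data, c_data, w_res=1.0, w_data=1.0):
    # Residual loss at collocation points in time-parameter space
    loss_res = reduced_residual(net(mu_res), A_r, f_r).pow(2).mean()
    # Data loss on labeled reduced coefficients (projections of snapshots)
    loss_data = (net(mu_data) - c_data).pow(2).mean()
    return w_res * loss_res + w_data * loss_data   # w_data = 0 gives the PINN

# Placeholder quantities purely for illustration
mu_res = torch.rand(256, n_param)                  # residual (collocation) points
A_r = torch.eye(n_rb).expand(256, n_rb, n_rb)      # reduced operator per point
f_r = torch.rand(256, n_rb)                        # reduced right-hand side
mu_data = torch.rand(32, n_param)                  # snapshot parameter locations
c_data = torch.rand(32, n_rb)                      # projected snapshot coefficients

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = prnn_loss(mu_res, A_r, f_r, mu_data, c_data)
    loss.backward()
    opt.step()

In the paper's setting the residual would instead come from the Galerkin-projected form of the governing (possibly time-dependent, nonlinear) PDE, but the weighted-sum structure of the loss is the same.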



Updated: 2021-09-06