POD-DL-ROM: Enhancing deep learning-based reduced order models for nonlinear parametrized PDEs by proper orthogonal decomposition
Computer Methods in Applied Mechanics and Engineering (IF 7.2), Pub Date: 2021-10-13, DOI: 10.1016/j.cma.2021.114181
Stefania Fresca, Andrea Manzoni

Deep learning-based reduced order models (DL-ROMs) have been recently proposed to overcome common limitations shared by conventional reduced order models (ROMs) – built, e.g., through proper orthogonal decomposition (POD) – when applied to nonlinear time-dependent parametrized partial differential equations (PDEs). These might be related to (i) the need to deal with projections onto high dimensional linear approximating trial manifolds, (ii) expensive hyper-reduction strategies, or (iii) the intrinsic difficulty to handle physical complexity with a linear superimposition of modes. All these aspects are avoided when employing DL-ROMs, which learn in a non-intrusive way both the nonlinear trial manifold and the reduced dynamics, by relying on deep (e.g., feedforward, convolutional, autoencoder) neural networks. Although extremely efficient at testing time, when evaluating the PDE solution for any new testing-parameter instance, DL-ROMs require an expensive training stage, because of the extremely large number of network parameters to be estimated. In this paper we propose a possible way to avoid an expensive training stage of DL-ROMs, by (i) performing a prior dimensionality reduction through POD, and (ii) relying on a multi-fidelity pretraining stage, where different physical models can be efficiently combined. The proposed POD-DL-ROM is tested on several (both scalar and vector, linear and nonlinear) time-dependent parametrized PDEs (such as, e.g., linear advection–diffusion–reaction, nonlinear diffusion–reaction, nonlinear elastodynamics, and Navier–Stokes equations) to show the generality of this approach and its remarkable computational savings.
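The two-stage idea described in the abstract – first compress the full-order snapshots with POD, then learn the nonlinear trial manifold and the reduced dynamics with neural networks acting on the POD coefficients – can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes, the variable names (N_POD, latent_dim, n_params), the dense (rather than convolutional) autoencoder, and the placeholder data are assumptions made for brevity.

```python
# Minimal, illustrative sketch of a POD-DL-ROM-style pipeline (assumptions, not the paper's code):
# (1) POD via truncated SVD of the snapshot matrix, (2) an autoencoder over POD coefficients
# plus a feedforward map from (time, parameters) to the latent code.
import numpy as np
import torch
import torch.nn as nn

# --- Stage 1: POD on the snapshot matrix S (n_dofs x n_snapshots) -------------------
def pod_basis(S, N_POD):
    """First N_POD left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :N_POD]                          # V in R^{n_dofs x N_POD}

n_dofs, n_snap, N_POD, latent_dim, n_params = 2000, 300, 64, 4, 2
S = np.random.rand(n_dofs, n_snap)               # placeholder full-order snapshots
V = pod_basis(S, N_POD)
q = V.T @ S                                      # POD coefficients, N_POD x n_snapshots

# --- Stage 2: DL-ROM trained on the POD coefficients --------------------------------
class PODDLROM(nn.Module):
    """Autoencoder over POD coefficients + feedforward map from (t, mu) to the latent code."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_POD, 128), nn.ELU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ELU(),
                                     nn.Linear(128, N_POD))
        # "reduced dynamics": maps (t, mu_1, ..., mu_p) directly to the latent code
        self.dynamics = nn.Sequential(nn.Linear(1 + n_params, 64), nn.ELU(),
                                      nn.Linear(64, latent_dim))

    def forward(self, q_batch, t_mu):
        z_enc = self.encoder(q_batch)            # latent code from POD coefficients
        z_dyn = self.dynamics(t_mu)              # latent code predicted from (t, mu)
        return self.decoder(z_dyn), z_enc, z_dyn

model = PODDLROM()
q_batch = torch.tensor(q.T, dtype=torch.float32)     # n_snapshots x N_POD
t_mu = torch.rand(n_snap, 1 + n_params)               # placeholder (time, parameter) inputs
q_rec, z_enc, z_dyn = model(q_batch, t_mu)
# loss combines reconstruction of the POD coefficients and matching of the two latent codes
loss = nn.functional.mse_loss(q_rec, q_batch) + nn.functional.mse_loss(z_dyn, z_enc)
```

In such a sketch, evaluating the solution for a new parameter instance at testing time only requires the feedforward map, the decoder, and a multiplication by the POD basis V, which is where the non-intrusive, online computational savings come from; the prior POD step keeps the network small because it acts on N_POD-dimensional vectors instead of the full mesh.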




Updated: 2021-10-13