Recurrent neural network closure of parametric POD-Galerkin reduced-order models based on the Mori-Zwanzig formalism
Journal of Computational Physics (IF 3.8), Pub Date: 2020-03-12, DOI: 10.1016/j.jcp.2020.109402
Qian Wang , Nicolò Ripamonti , Jan S. Hesthaven

Closure modeling based on the Mori-Zwanzig formalism has proven effective in improving the stability and accuracy of projection-based model order reduction. However, closure models are often expensive and infeasible for complex nonlinear systems. Towards efficient model reduction of general problems, this paper presents a recurrent neural network (RNN) closure for parametric POD-Galerkin reduced-order models. Based on the short time history of the reduced-order solution, the RNN predicts the memory integral, which represents the impact of the unresolved scales on the resolved scales. A conditioned long short-term memory (LSTM) network is utilized as the regression model of the memory integral: the POD coefficients at a number of past time steps are fed into the LSTM units, and the physical/geometrical parameters are fed into the initial hidden state of the LSTM. The reduced-order model is integrated in time using an implicit-explicit (IMEX) Runge-Kutta scheme, in which the memory term is integrated explicitly and the remaining right-hand-side term is integrated implicitly to improve computational efficiency. Numerical results demonstrate that the RNN closure can significantly improve the accuracy and efficiency of POD-Galerkin reduced-order models of nonlinear problems. The POD-Galerkin reduced-order model with the RNN closure is also shown to make accurate predictions well beyond the time interval of the training data.
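The time-stepping idea in the abstract — treat the closure (memory) term explicitly and the remaining right-hand side implicitly — can be sketched with a minimal first-order IMEX Euler step. This is an illustrative sketch, not the paper's scheme: it assumes hypothetical linear ROM dynamics da/dt = A a + m(history), and `memory_closure` is a hand-coded placeholder standing in for the trained conditioned LSTM that would predict the memory integral from the recent POD coefficients.

```python
import numpy as np

def memory_closure(history):
    """Placeholder for the RNN closure: maps the short history of POD
    coefficients to a memory-integral estimate. Here: an exponentially
    weighted average of past states (purely illustrative)."""
    weights = np.exp(-np.arange(len(history))[::-1])  # most recent step weighted highest
    weights /= weights.sum()
    return -0.1 * sum(w * a for w, a in zip(weights, history))

def imex_euler_step(A, history, dt):
    """One IMEX Euler step: the (typically stiff) linear term is implicit,
    the memory term explicit. Solves (I - dt*A) a_{n+1} = a_n + dt * m."""
    a_n = history[-1]
    rhs = a_n + dt * memory_closure(history)
    return np.linalg.solve(np.eye(len(a_n)) - dt * A, rhs)

# Toy usage: 3 POD modes, stable diagonal dynamics, sliding window of 4 steps.
A = np.diag([-1.0, -2.0, -5.0])
history = [np.array([1.0, 0.5, 0.2])] * 4
for _ in range(10):
    a_new = imex_euler_step(A, history, dt=0.1)
    history = history[1:] + [a_new]  # slide the history window forward
```

In an actual implementation, `memory_closure` would be replaced by the conditioned LSTM evaluated on the history window, with the problem parameters injected through the initial hidden state; the explicit treatment of that term avoids solving a nonlinear system involving the network at each stage.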




Updated: 2020-03-12