Active training of physics-based neural networks to aggregate and interpolate parametric solutions to the Navier-Stokes equations
Journal of Computational Physics (IF 4.1). Pub Date: 2021-04-15. DOI: 10.1016/j.jcp.2021.110364. Christopher J. Arthurs, Andrew P. King
The goal of this work is to train a neural network that approximates solutions to the Navier-Stokes equations across a region of parameter space, where the parameters define physical properties such as domain shape and boundary conditions. The contributions of this work are threefold:
- 1.
To demonstrate that neural networks can be efficient aggregators of whole families of parametric solutions to physical problems, trained using data created with traditional, trusted numerical methods such as finite elements. Advantages include extremely fast evaluation of pressure and velocity at any point in physical and parameter space (asymptotically, 3 μs per query) and data compression (the network requires 99% less storage than its own training data).
- 2.
To demonstrate that the neural networks can accurately interpolate between finite element solutions in parameter space, allowing them to be instantly queried for pressure and velocity field solutions to problems for which traditional simulations have never been performed.
- 3.
To introduce an active learning algorithm, so that during training, a finite element solver can automatically be queried to obtain additional training data in locations where the neural network's predictions are in most need of improvement, thus autonomously acquiring and efficiently distributing training data throughout parameter space.
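The active learning loop described in the third contribution can be sketched in miniature. This is an illustrative stand-in only, not the paper's method: `fem_solve` is a hypothetical one-parameter toy problem playing the role of the trusted finite element solver, and a piecewise-linear interpolant plays the role of the neural network surrogate. The acquisition step here uses the true error directly, whereas a real scheme would use an error estimate that does not require extra solves.

```python
import math
import random

def fem_solve(mu):
    """Stand-in for an expensive finite element solve at parameter value mu."""
    return math.sin(3.0 * mu) + 0.5 * mu  # toy 'solution functional'

class Surrogate:
    """Piecewise-linear interpolant over (parameter, value) samples,
    playing the role of the trained network."""
    def __init__(self, samples):
        self.samples = sorted(samples)

    def predict(self, mu):
        pts = self.samples
        if mu <= pts[0][0]:
            return pts[0][1]
        for (m0, v0), (m1, v1) in zip(pts, pts[1:]):
            if m0 <= mu <= m1:
                t = (mu - m0) / (m1 - m0)
                return (1.0 - t) * v0 + t * v1
        return pts[-1][1]

def active_train(n_rounds=5, n_candidates=200):
    # Start from a coarse sweep of the parameter interval [0, 2].
    samples = [(mu, fem_solve(mu)) for mu in (0.0, 1.0, 2.0)]
    for _ in range(n_rounds):
        model = Surrogate(samples)
        # Query the solver where the surrogate most needs improvement;
        # here 'need of improvement' is proxied by the true error.
        cands = [random.uniform(0.0, 2.0) for _ in range(n_candidates)]
        worst = max(cands, key=lambda mu: abs(model.predict(mu) - fem_solve(mu)))
        samples.append((worst, fem_solve(worst)))
    return Surrogate(samples)

model = active_train()
```

Because each round places a new solver query at the candidate parameter with the largest surrogate error, training data accumulates preferentially in the regions of parameter space where the surrogate is weakest, which is the essential idea behind the paper's autonomous data-acquisition strategy.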