Differentiable programming for online training of a neural artificial viscosity function within a staggered grid Lagrangian hydrodynamics scheme
Machine Learning: Science and Technology (IF 6.3). Pub Date: 2021-03-02. DOI: 10.1088/2632-2153/abd644
Pake Melland 1 , Jason Albright 2 , Nathan M Urban 3, 4
Lagrangian methods for solving the inviscid Euler equations produce numerical oscillations near shock waves. A common approach to reducing these oscillations is to add artificial viscosity (AV) to the discrete equations. The AV term acts as a dissipative mechanism that attenuates oscillations by smearing the shock across a finite number of computational cells. However, AV introduces several control parameters that are not determined by the underlying physical model, and hence, in practice, are tuned to the characteristics of a given problem. We seek to improve the standard quadratic-linear AV form by replacing it with a learned neural function that reduces oscillations relative to exact solutions of the Euler equations, resulting in a hybrid numerical-neural hydrodynamic solver. Because AV is an artificial construct that exists solely to improve the numerical properties of a hydrodynamic code, there is no offline ‘viscosity data’ against which a neural network can be trained before insertion into a numerical simulation, thus requiring online training. We achieve this via differentiable programming, i.e. end-to-end backpropagation or adjoint solution through both the neural and differential equation code, using automatic differentiation of the hybrid code in the Julia programming language to calculate the necessary loss function gradients. A novel offline pre-training step accelerates training by initializing the neural network to the default numerical AV scheme, which can be learned rapidly by space-filling sampling over the AV input space. We find that online training over early time steps of a simulation is sufficient to learn a neural AV function that reduces numerical oscillations in long-term hydrodynamic shock simulations. These results offer an early proof-of-principle that online differentiable training of hybrid numerical schemes with novel neural network components can improve certain performance aspects of existing purely numerical schemes.
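As an illustrative sketch (not the paper's code), the standard quadratic-linear AV mentioned above and the offline pre-training data it implies can be written in a few lines. The coefficient names and values (`c_quad`, `c_lin`) and the sampling ranges are assumptions for illustration, and plain uniform random sampling stands in for the paper's space-filling design:

```python
import random

def quadratic_linear_av(rho, sound_speed, du, c_quad=2.0, c_lin=1.0):
    """Quadratic-linear (von Neumann-Richtmyer style) artificial viscosity.

    Active only under compression (velocity jump du < 0); zero in expansion.
    Coefficients c_quad and c_lin are the tunable control parameters the
    abstract refers to.
    """
    if du >= 0.0:  # expansion: no artificial viscosity needed
        return 0.0
    return rho * (c_quad * du * du + c_lin * sound_speed * abs(du))

# Offline pre-training set: sample the AV input space and record the default
# scheme's output as the regression target used to initialize the neural AV.
random.seed(0)
pretrain = []
for _ in range(1000):
    rho = random.uniform(0.1, 10.0)   # density
    cs = random.uniform(0.1, 5.0)     # local sound speed
    du = random.uniform(-2.0, 2.0)    # velocity jump across the cell
    pretrain.append(((rho, cs, du), quadratic_linear_av(rho, cs, du)))
```

A network pre-trained to regress these (input, AV) pairs starts online training from the default scheme's behavior, which the abstract reports accelerates the subsequent differentiable-programming step.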




Updated: 2021-03-02