Deep neural networks for geometric multigrid methods
arXiv - CS - Numerical Analysis. Pub Date: 2021-06-14. DOI: arXiv-2106.07687
Nils Margenberg, Robert Jendersie, Thomas Richter, Christian Lessig

We investigate the scaling and efficiency of the deep neural network multigrid method (DNN-MG). DNN-MG is a novel neural-network-based technique for simulating the Navier-Stokes equations that combines an adaptive geometric multigrid solver, i.e. a highly efficient classical solution scheme, with a recurrent neural network with memory. In DNN-MG, the neural network replaces one or more of the finest multigrid levels and provides a correction for the classical solve in the next time step. This causes little degradation in solution quality while substantially reducing the overall computational cost. At the same time, the use of the multigrid solver at the coarse scales permits a compact network that is easy to train, generalizes well, and allows for the incorporation of physical constraints. Previous work on DNN-MG focused on the overall scheme and on how to enforce divergence freedom of the solution. In this work, we investigate how the network size affects training, solution quality, and the overall runtime of the computations. Our results demonstrate that larger networks capture the flow behavior better while requiring only little additional training time. At runtime, the neural network correction can even reduce the computation time compared to a classical multigrid simulation by accelerating the convergence of the nonlinear solve required at every time step.
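The core idea behind DNN-MG, a classical coarse-grid correction augmented by a learned correction on the finest level, can be illustrated with a minimal two-grid cycle for a 1D Poisson model problem. This is a sketch under stated assumptions, not the authors' implementation: the smoother, restriction, and prolongation choices are illustrative, and `network_correction` is a hypothetical zero stub standing in for the recurrent network that would predict the fine-level correction.

```python
import numpy as np

def smooth(u, f, h, iters=3, w=0.8):
    """Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
    for _ in range(iters):
        u[1:-1] += w * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def network_correction(r):
    """Hypothetical stand-in for the DNN-MG recurrent network (zero stub)."""
    return np.zeros_like(r)

def two_grid_cycle(u, f, h, correction=network_correction):
    u = smooth(u, f, h)                        # pre-smoothing on the fine grid
    r = residual(u, f, h)
    rc = r[::2]                                # restriction by injection
    nc = rc.size
    # direct coarse solve of -e'' = r with grid spacing 2h (tridiagonal system)
    A = (2.0 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / (2.0 * h) ** 2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.zeros_like(u)                       # prolongation by linear interpolation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    u += correction(r)                         # learned fine-level correction (stub here)
    return smooth(u, f, h)                     # post-smoothing

# usage: -u'' = pi^2 sin(pi x) on [0,1], exact solution sin(pi x)
x = np.linspace(0.0, 1.0, 65)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros_like(x)
for _ in range(5):
    u = two_grid_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

In DNN-MG the `correction` hook would be a trained recurrent network acting on the fine-level state rather than a zero stub, and the coarse solve would itself recurse into further multigrid levels instead of using a direct solver.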

Updated: 2021-06-17