Train Once and Use Forever: Solving Boundary Value Problems in Unseen Domains with Pre-trained Deep Learning Models
arXiv - CS - Performance. Pub Date: 2021-04-22, DOI: arxiv-2104.10873
Hengjie Wang, Robert Planas, Aparna Chandramowlishwaran, Ramin Bostanabad

Physics-informed neural networks (PINNs) are increasingly employed to replace or augment traditional numerical methods in solving partial differential equations (PDEs). While they have many attractive features, state-of-the-art PINNs surrogate a specific realization of a PDE system and hence are problem-specific. That is, each time the boundary conditions or domain shape changes, the model needs to be re-trained. This limitation prohibits the application of PINNs to realistic or large-scale engineering problems, especially since the costs and efforts associated with their training are considerable. This paper introduces a transferable framework for solving boundary value problems (BVPs) via deep neural networks that can be trained once and used forever for various domains of unseen sizes, shapes, and boundary conditions. First, we introduce the \emph{genomic flow network} (GFNet), a neural network that can infer the solution of a BVP across arbitrary boundary conditions on a small square domain called a \emph{genome}. Then, we propose the \emph{mosaic flow} (MF) predictor, a novel iterative algorithm that assembles or stitches the GFNet's inferences to obtain the solution of BVPs on unseen, large domains while preserving the spatial regularity of the solution. We demonstrate that our framework can estimate the solution of the Laplace and Navier-Stokes equations in domains of unseen shapes and boundary conditions that are, respectively, $1200$ and $12$ times larger than the domains where training is performed. Since our framework eliminates the need to re-train, it demonstrates up to 3 orders of magnitude speedups compared to the state-of-the-art.
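The following is a minimal, runnable sketch of the workflow the abstract describes: a model trained on a small square "genome" is reused, without re-training, by iteratively stitching its inferences over overlapping genomes of a much larger domain. The GFNet itself is not reproduced here; `genome_infer` is a stand-in (a few Jacobi sweeps for the Laplace equation), and all names, sizes, and blending choices are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of "train once, use forever" stitching (assumptions: genome size,
# stride, and the mocked genome-level solver are NOT the paper's actual setup).
import numpy as np

def genome_infer(patch):
    """Stand-in for GFNet inference on one square genome: given the current
    values on the patch boundary, return a patch whose interior approximately
    satisfies the Laplace equation (here via Jacobi sweeps instead of a
    pre-trained network, which would produce the interior in one forward pass)."""
    out = patch.copy()
    for _ in range(200):
        out[1:-1, 1:-1] = 0.25 * (out[:-2, 1:-1] + out[2:, 1:-1] +
                                  out[1:-1, :-2] + out[1:-1, 2:])
    return out

def mosaic_flow(u, genome=9, stride=4, n_iters=50):
    """Iteratively stitch genome-level inferences over overlapping square
    subdomains so boundary information propagates across a domain that is
    much larger than the training genome. Only patch interiors are updated;
    each patch boundary acts as the arbitrary boundary condition fed to the
    genome model, and the outer domain boundary is never overwritten."""
    ny, nx = u.shape
    for _ in range(n_iters):
        for i in range(0, ny - genome + 1, stride):
            for j in range(0, nx - genome + 1, stride):
                patch = u[i:i + genome, j:j + genome]
                u[i + 1:i + genome - 1, j + 1:j + genome - 1] = \
                    genome_infer(patch)[1:-1, 1:-1]
    return u

# Usage: Laplace equation on a 33x65 domain (much larger than the 9x9 genome),
# with u = 1 on the left edge and u = 0 on the other edges.
u = np.zeros((33, 65))
u[:, 0] = 1.0
u = mosaic_flow(u)
print(u[16, ::8])  # values decay smoothly away from the u = 1 boundary
```

The stitching loop is essentially an overlapping-subdomain iteration: each pass refreshes every genome's interior from its current boundary values, so information from the true domain boundary gradually propagates inward, mirroring how the MF predictor reuses a single pre-trained GFNet on unseen, larger geometries.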

Updated: 2021-04-23