A machine-learning minimal-residual (ML-MRes) framework for goal-oriented finite element discretizations
Computers & Mathematics with Applications (IF 2.9), Pub Date: 2020-09-09, DOI: 10.1016/j.camwa.2020.08.012
Ignacio Brevis, Ignacio Muga, Kristoffer G. van der Zee

We introduce the concept of machine-learning minimal-residual (ML-MRes) finite element discretizations of partial differential equations (PDEs), which resolve quantities of interest with striking accuracy, regardless of the underlying mesh size. The methods are obtained within a machine-learning framework in which the parameters defining the method are tuned against available training data. In particular, we use a provably stable parametric Petrov–Galerkin method that is equivalent to a minimal-residual formulation using a weighted norm. While the trial space is a standard finite element space, the test space has parameters that are tuned in an off-line stage. Finding the optimal test space therefore amounts to obtaining a goal-oriented discretization that is completely tailored towards the quantity of interest. We use an artificial neural network to define the parametric family of test spaces. Using numerical examples for the Laplacian and advection equation in one and two dimensions, we demonstrate that the ML-MRes finite element method yields superior approximation of quantities of interest, even on very coarse meshes.
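To make the formulation described in the abstract concrete, the following is a minimal sketch in our own notation (the symbols b, ℓ, U_h, V, w_θ and q are ours; the precise discrete form used in the paper may differ). Given a well-posed variational problem b(u,v) = ℓ(v) for all v ∈ V, a standard finite element trial space U_h ⊂ U, and a test-space inner product (·,·)_{V,w_θ} weighted by parameters w_θ, the minimal-residual approximation reads

    u_h(θ) = \arg\min_{z_h \in U_h} \, \| \ell - b(z_h,\cdot) \|_{(V,w_\theta)'} ,

which is equivalent to a Petrov–Galerkin method whose (optimal) test functions are determined by the weighted norm. In the off-line stage, the parameters θ of the artificial neural network defining the weights are tuned so that the quantity of interest q(·) of the discrete solution matches the available training data {(ℓ^{(i)}, q(u^{(i)}))}:

    \min_{\theta} \sum_i \big| q\big(u_h^{(i)}(\theta)\big) - q\big(u^{(i)}\big) \big|^2 .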



Updated: 2020-09-10