Neural network guided adjoint computations in dual weighted residual error estimation
arXiv - CS - Numerical Analysis Pub Date : 2021-02-24 , DOI: arxiv-2102.12450 Julian Roth, Max Schröder, Thomas Wick
In this work, we are concerned with neural network guided goal-oriented a
posteriori error estimation and adaptivity using the dual weighted residual
method. The primal problem is solved using classical Galerkin finite elements.
The adjoint problem is solved in strong form with a feedforward neural network
using two or three hidden layers. The main objective of our approach is to
explore alternatives for solving the adjoint problem with greater potential
for numerical cost reduction. The proposed algorithm is based on the general
goal-oriented error estimation theorem including both linear and nonlinear
stationary partial differential equations and goal functionals. Our
developments are substantiated with numerical experiments that include
comparisons of neural-network-computed adjoints with classical finite element
solutions of the adjoint problems. In our implementation, the open-source
finite element library deal.II is coupled with LibTorch, the PyTorch C++
application programming interface.
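For context, the error identity underlying the dual weighted residual method can be stated, for a linear variational problem and a linear goal functional, in standard notation (this is the classical form due to Becker and Rannacher, not copied verbatim from the paper):

```latex
\text{Primal: find } u \in V:\quad a(u,\varphi) = f(\varphi) \quad \forall \varphi \in V,
\qquad
\text{Adjoint: find } z \in V:\quad a(\varphi,z) = J(\varphi) \quad \forall \varphi \in V,
```
```latex
J(u) - J(u_h) \;=\; f(z - i_h z) - a(u_h,\, z - i_h z) \;=:\; \rho(u_h)(z - i_h z),
```

where \(i_h z\) denotes an interpolation of the adjoint solution into the discrete space; nonlinear equations and goal functionals contribute an additional higher-order remainder term. It is this adjoint \(z\), weighting the primal residual, that the paper approximates with a neural network in strong form.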
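The architectural idea of the adjoint surrogate can be sketched as a plain feedforward map with two hidden layers, matching the two-to-three hidden layers mentioned in the abstract. The following stdlib-only Python sketch is hypothetical (the paper's implementation uses LibTorch in C++, and the layer widths and tanh activations here are assumptions); it only shows the forward evaluation, whereas in practice the weights would be trained by minimizing the strong-form adjoint residual:

```python
import math
import random

random.seed(0)

def init_layer(n_in, n_out):
    """Uniform initialization of one dense layer, scaled by fan-in."""
    s = math.sqrt(1.0 / n_in)
    W = [[random.uniform(-s, s) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return W, b

def dense(W, b, x, act=math.tanh):
    """Apply one dense layer: act(W x + b), computed row by row."""
    return [act(sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

class AdjointMLP:
    """Feedforward surrogate z_theta: R^d -> R with two hidden layers."""
    def __init__(self, d_in=2, width=20):
        self.l1 = init_layer(d_in, width)
        self.l2 = init_layer(width, width)
        self.l3 = init_layer(width, 1)

    def __call__(self, x):
        h = dense(*self.l1, x)
        h = dense(*self.l2, h)
        out = dense(*self.l3, h, act=lambda t: t)  # linear output layer
        return out[0]

net = AdjointMLP()
z_val = net([0.5, 0.5])  # pointwise adjoint evaluation at (0.5, 0.5)
```

Because the network is evaluated pointwise, its output can be interpolated into the finite element space and inserted as the weight in the residual-based error estimator.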
Updated: 2021-02-25