Notes on Exact Boundary Values in Residual Minimisation
arXiv - CS - Numerical Analysis. Pub Date: 2021-05-06, DOI: arxiv-2105.02550
Johannes Müller, Marius Zeinhofer

We analyse how the mode of convergence differs between exact and penalised boundary values in the residual minimisation of PDEs with neural-network-type ansatz functions, as is commonly done in the context of physics-informed neural networks. It is known that using an $L^2$ boundary penalty leads to a loss of regularity of $3/2$, meaning that approximation in $H^2$ only yields a priori estimates in $H^{1/2}$. These notes demonstrate how this loss of regularity can be circumvented if the functions in the ansatz class satisfy the boundary values exactly. Furthermore, it is shown that in this case the loss function provides a consistent a posteriori estimator for the $H^2$ norm of the error made by the residual minimisation method. We provide analogous results for linear time-dependent problems and discuss the implications of measuring the residual in Sobolev norms.
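The distinction the abstract draws can be sketched in code. Below is a minimal illustration, not taken from the paper, of how exact boundary values are typically imposed in one dimension: the ansatz is multiplied by a cutoff that vanishes on the boundary and a boundary lift is added, so the boundary condition holds for every parameter choice; the $L^2$-penalised loss it is contrasted with only enforces the boundary weakly. The function `network` and all parameter values are hypothetical stand-ins for a neural network.

```python
import numpy as np

def network(x, params):
    # Hypothetical stand-in for a neural-network ansatz N_theta(x);
    # a real PINN would use a small MLP here.
    w, b = params
    return np.tanh(w * x + b)

def exact_bc_ansatz(x, params, a=0.0, b=1.0):
    """Ansatz satisfying u(0) = a, u(1) = b exactly on [0, 1]:
        u(x) = (1 - x) * a + x * b + x * (1 - x) * N_theta(x).
    The cutoff x * (1 - x) vanishes at both endpoints, so the boundary
    values hold identically, independent of the parameters."""
    return (1 - x) * a + x * b + x * (1 - x) * network(x, params)

def penalised_loss(residual, u_boundary, target, lam=100.0):
    """L^2 boundary-penalty loss: the weak enforcement the notes
    compare against, which incurs the 3/2 loss of regularity."""
    return np.mean(residual**2) + lam * np.mean((u_boundary - target)**2)

params = (1.3, -0.2)  # arbitrary illustrative parameters
print(exact_bc_ansatz(np.array([0.0]), params)[0])  # -> 0.0
print(exact_bc_ansatz(np.array([1.0]), params)[0])  # -> 1.0
```

With the exact-boundary ansatz, the loss reduces to the PDE residual alone, which is what allows it to act as a consistent a posteriori error estimator in the setting of the paper.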

Updated: 2021-05-07