Adaptive greedy algorithms based on parameter‐domain decomposition and reconstruction for the reduced basis method
International Journal for Numerical Methods in Engineering ( IF 2.9 ) Pub Date : 2020-10-13 , DOI: 10.1002/nme.6544
Jiahua Jiang 1 , Yanlai Chen 2
Affiliation  

The reduced basis method (RBM) empowers repeated and rapid evaluation of parametrized partial differential equations through an offline-online decomposition, a.k.a. a learning-execution process. A key feature of the method is a greedy algorithm that repeatedly scans the training set, a fine discretization of the parameter domain, to identify the next dimension of the parameter-induced solution manifold along which the surrogate solution space is expanded. Although the method has been successfully applied to problems with fairly high parametric dimensions, the challenge is that this scanning cost dominates the offline cost, because it is proportional to the cardinality of the training set, which grows exponentially with the parameter dimension. In this work, we review three recent attempts to effectively delay this curse of dimensionality, and propose two new hybrid strategies based on successive refinement and multilevel maximization of the error estimate over the training set. All five offline-enhanced methods and the original greedy algorithm are tested and compared on two types of problems: the thermal block problem and the geometrically parameterized Helmholtz problem.
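The greedy scan the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: `truth_solve` is a hypothetical stand-in for the expensive high-fidelity PDE solve, and `error_estimate` substitutes a plain projection residual for the a posteriori error estimator used in practice. The point is the structure of the loop, whose per-iteration cost is proportional to the size of `train_set`:

```python
import numpy as np

def truth_solve(mu):
    # Hypothetical stand-in for the expensive high-fidelity solve at parameter mu.
    return np.array([np.sin(mu), np.cos(mu), mu ** 2])

def error_estimate(mu, basis):
    # Stand-in for the a posteriori error estimator: here, the norm of the
    # residual after orthogonal projection onto the current reduced space.
    u = truth_solve(mu)
    if basis.shape[0] == 0:
        return np.linalg.norm(u)
    q, _ = np.linalg.qr(basis.T)
    return np.linalg.norm(u - q @ (q.T @ u))

def greedy_rbm(train_set, n_max, tol=1e-8):
    basis = np.empty((0, 3))
    for _ in range(n_max):
        # Scan the entire training set -- the cost the offline-enhanced
        # strategies in this paper aim to reduce.
        errs = [error_estimate(mu, basis) for mu in train_set]
        k = int(np.argmax(errs))
        if errs[k] < tol:
            break
        # Enrich the reduced basis with the snapshot at the worst parameter.
        u = truth_solve(train_set[k])
        basis = np.vstack([basis, u / np.linalg.norm(u)])
    return basis

train = np.linspace(0.0, 2.0, 101)
B = greedy_rbm(train, n_max=3)
print(B.shape)
```

The offline-enhanced variants keep this outer loop but replace the exhaustive `errs` computation with scans over adaptively decomposed or coarsened subsets of the training set.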
