Kernel Optimization for Low-Rank Multi-Fidelity Algorithms
International Journal for Uncertainty Quantification (IF 1.5) Pub Date: 2020-01-01, DOI: 10.1615/int.j.uncertaintyquantification.2020033212
Mani Razi, Robert Mike Kirby, Akil Narayan

One of the major challenges for low-rank multi-fidelity (MF) approaches is the assumption that low-fidelity (LF) and high-fidelity (HF) models admit "similar" low-rank kernel representations. Low-rank MF methods have traditionally attempted to exploit low-rank representations of linear kernels, which are kernel functions of the form K(u, v) = v^T u for vectors u and v. However, such linear kernels may not be able to capture low-rank behavior, and they may admit LF and HF kernels that are not similar. Such a situation renders a naive approach to low-rank MF procedures ineffective. In this paper, we propose a novel approach for the selection of a near-optimal kernel function for use in low-rank MF methods. The proposed framework is a two-step strategy wherein: (1) hyperparameters of a library of kernel functions are optimized, and (2) a particular combination of the optimized kernels is selected, through either a convex mixture (Additive Kernel Approach) or through a data-driven optimization (Adaptive Kernel Approach). The two resulting methods for this generalized framework both utilize only the available inexpensive low-fidelity data, and thus no evaluation of the high-fidelity simulation model is needed until a kernel is chosen. These proposed approaches are tested on five non-trivial real-world problems, including multi-fidelity surrogate modeling for one- and two-species molecular systems, a gravitational many-body problem, associating polymer networks, plasmonic nano-particle arrays, and an incompressible flow in channels with stenosis. The results of these numerical experiments demonstrate the numerical stability and efficiency of both proposed kernel function selection procedures, as well as the high accuracy of the resultant predictive models for estimating quantities of interest. Comparisons against standard linear kernel procedures also demonstrate the increased accuracy of the optimized kernel approaches.
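The two-step strategy in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: the kernel library (linear plus Gaussian/RBF), the rank-based hyperparameter criterion, and all function names are assumptions chosen for concreteness; the paper's actual optimization objectives and Adaptive Kernel Approach are not reproduced here. The sketch shows the linear kernel K(u, v) = v^T u, a crude stand-in for step (1) hyperparameter optimization using only LF data, and step (2) as a convex mixture of the optimized kernels (Additive Kernel Approach).

```python
import numpy as np

def linear_kernel(U, V):
    # Linear kernel K(u, v) = v^T u, evaluated for all row pairs.
    return U @ V.T

def rbf_kernel(U, V, gamma):
    # Gaussian (RBF) kernel with hyperparameter gamma.
    d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def additive_kernel(U, V, weights, gamma):
    # Convex mixture of optimized kernels (Additive Kernel Approach):
    # weights are nonnegative and sum to one.
    w_lin, w_rbf = weights
    return w_lin * linear_kernel(U, V) + w_rbf * rbf_kernel(U, V, gamma)

# Only inexpensive low-fidelity snapshots are used; no high-fidelity
# model evaluations are needed until a kernel has been chosen.
rng = np.random.default_rng(0)
U_lf = rng.standard_normal((20, 3))

# Step 1 (illustrative stand-in): select gamma for the RBF kernel by a
# grid search favoring a low numerical rank of the LF Gram matrix.
_, gamma = min(
    (np.linalg.matrix_rank(rbf_kernel(U_lf, U_lf, g)), g)
    for g in [0.01, 0.1, 1.0]
)

# Step 2: form the convex mixture of the optimized kernels.
K = additive_kernel(U_lf, U_lf, weights=(0.5, 0.5), gamma=gamma)
print(K.shape)  # (20, 20)
```

A convex mixture of positive semidefinite Gram matrices is itself positive semidefinite, so the additive kernel remains a valid kernel for any nonnegative weights summing to one.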
