Bayesian quadrature for $H^1(\mu)$ with Poincaré inequality on a compact interval
arXiv - MATH - Statistics Theory. Pub Date: 2022-07-29. arXiv: 2207.14564
Olivier Roustant (GdR MASCOT-NUM, INSA Toulouse, IMT), Nora Lüthen (IMT), Fabrice Gamboa (IMT)

Motivated by uncertainty quantification of complex systems, we aim to find quadrature formulas of the form $\int_a^b f(x) d\mu(x) = \sum_{i=1}^n w_i f(x_i)$ where $f$ belongs to $H^1(\mu)$. Here, $\mu$ belongs to a class of continuous probability distributions on $[a, b] \subset \mathbb{R}$ and $\sum_{i=1}^n w_i \delta_{x_i}$ is a discrete probability distribution on $[a, b]$. We show that $H^1(\mu)$ is a reproducing kernel Hilbert space with a continuous kernel $K$, which allows us to reformulate the quadrature question as a Bayesian (or kernel) quadrature problem. Although $K$ does not have a simple closed form in general, we establish a correspondence between its spectral decomposition and the one associated with Poincaré inequalities, whose common eigenfunctions form a $T$-system (Karlin and Studden, 1966). The quadrature problem can then be solved in the finite-dimensional proxy space spanned by the first eigenfunctions. The solution is given by a generalized Gaussian quadrature, which we call Poincaré quadrature. We derive several results for the Poincaré quadrature weights and the associated worst-case error. When $\mu$ is the uniform distribution, the results are explicit: the Poincaré quadrature is equivalent to the midpoint (rectangle) quadrature rule. Its nodes coincide with the zeros of an eigenfunction, and the worst-case error scales as $\frac{b-a}{2\sqrt{3}}n^{-1}$ for large $n$. By comparison with known results for $H^1(0,1)$, this shows that the Poincaré quadrature is asymptotically optimal. For a general $\mu$, we provide an efficient numerical procedure based on finite elements and linear programming. Numerical experiments provide useful insights: nodes are nearly evenly spaced, weights are close to the probability density at the nodes, and the worst-case error is approximately $O(n^{-1})$ for large $n$.
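In the uniform case, the abstract states that the Poincaré quadrature reduces to the midpoint (rectangle) rule with the worst-case error scaling as $\frac{b-a}{2\sqrt{3}}n^{-1}$. A minimal sketch of that explicit case (the function names are illustrative, not from the paper):

```python
# Sketch of the uniform-distribution case of the Poincaré quadrature,
# which per the abstract coincides with the midpoint (rectangle) rule:
# nodes x_i = a + (i - 1/2)(b - a)/n, equal probability weights w_i = 1/n,
# approximating int_a^b f(x) dmu(x) with mu uniform on [a, b].
import math

def midpoint_quadrature(f, a, b, n):
    """Midpoint rule viewed as quadrature against the uniform distribution:
    sum_i w_i f(x_i) with probability weights w_i = 1/n summing to 1."""
    h = (b - a) / n
    nodes = [a + (i + 0.5) * h for i in range(n)]
    weights = [1.0 / n] * n
    return sum(w * f(x) for w, x in zip(weights, nodes))

def worst_case_error_asymptotic(a, b, n):
    """Large-n scaling of the worst-case error over the unit ball of H^1(mu)
    stated in the abstract for uniform mu: (b - a) / (2 sqrt(3)) * n^{-1}."""
    return (b - a) / (2.0 * math.sqrt(3)) / n
```

For example, `midpoint_quadrature(math.sin, 0.0, 1.0, 100)` approximates $\int_0^1 \sin(x)\,dx = 1 - \cos(1)$, since for uniform $\mu$ on $[0,1]$ the integral against $\mu$ equals the Lebesgue integral. For a general $\mu$ the nodes and weights are no longer explicit; the paper obtains them numerically via finite elements and linear programming.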

Updated: 2022-08-01