On the worst-case error of least squares algorithms for $L_2$-approximation with high probability
arXiv - CS - Numerical Analysis Pub Date : 2020-03-25 , DOI: arxiv-2003.11947
Mario Ullrich

It was recently shown in [4] that, for $L_2$-approximation of functions from a Hilbert space, function values are almost as powerful as arbitrary linear information, if the approximation numbers are square-summable. That is, we showed that \[ e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j\geq k_n} a_j^2} \qquad \text{ with }\quad k_n \asymp \frac{n}{\ln(n)}, \] where $e_n$ are the sampling numbers and $a_k$ are the approximation numbers. In particular, if $(a_k)\in\ell_2$, then $e_n$ and $a_n$ are of the same polynomial order. For this, we presented an explicit (weighted least squares) algorithm based on i.i.d. random points and proved that this works with positive probability. This implies the existence of a good deterministic sampling algorithm. Here, we present a modification of the proof in [4] that shows that the same algorithm works with probability at least $1-{n^{-c}}$ for all $c>0$.
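A minimal numerical sketch of the idea, under assumptions not in the abstract: we use an unweighted least squares fit (the paper's algorithm is a specific weighted variant), a cosine basis orthonormal in $L_2([0,1])$, a hypothetical target function, and the choice $k_n \asymp n/\ln(n)$ from the display above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target function on [0, 1]; L2 is taken w.r.t. the uniform measure.
f = lambda x: np.exp(x) * np.sin(3 * x)

def basis(x, k):
    # Columns: 1, sqrt(2) cos(pi x), ..., sqrt(2) cos((k-1) pi x)
    # (an orthonormal basis of the first k cosine modes in L2([0,1])).
    cols = [np.ones_like(x)] + [np.sqrt(2) * np.cos(j * np.pi * x) for j in range(1, k)]
    return np.column_stack(cols)

n = 200                          # number of function values (samples)
k = max(1, int(n / np.log(n)))   # k_n ~ n / ln(n), as in the abstract
x = rng.uniform(0, 1, n)         # i.i.d. random sampling points

# Least squares projection onto the span of the first k basis functions.
coef, *_ = np.linalg.lstsq(basis(x, k), f(x), rcond=None)

# Estimate the L2 error of the reconstruction on a fine grid.
grid = np.linspace(0, 1, 10_000)
err = np.sqrt(np.mean((f(grid) - basis(grid, k) @ coef) ** 2))
print(f"k = {k}, estimated L2 error = {err:.2e}")
```

This is only an illustration of the sampling-and-least-squares mechanism; it omits the weights and the spectral arguments that make the high-probability bound of the paper work.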

Updated: 2020-03-27