Random sections of ellipsoids and the power of random information
Transactions of the American Mathematical Society (IF 1.2), Pub Date: 2021-09-16, DOI: 10.1090/tran/8502
Aicke Hinrichs, David Krieg, Erich Novak, Joscha Prochno, Mario Ullrich

Abstract: We study the circumradius of the intersection of an $m$-dimensional ellipsoid $\mathcal{E}$ with semi-axes $\sigma_1 \geq \dots \geq \sigma_m$ with random subspaces of codimension $n$, where $n$ can be much smaller than $m$. We find that, under certain assumptions on $\sigma$, this random radius $\mathcal{R}_n = \mathcal{R}_n(\sigma)$ is of the same order as the minimal such radius $\sigma_{n+1}$ with high probability. In other situations $\mathcal{R}_n$ is close to the maximum $\sigma_1$. The random variable $\mathcal{R}_n$ naturally corresponds to the worst-case error of the best algorithm based on random information for $L_2$-approximation of functions from a compactly embedded Hilbert space $H$ with unit ball $\mathcal{E}$. In particular, $\sigma_k$ is the $k$th largest singular value of the embedding $H \hookrightarrow L_2$. In this formulation, one can also consider the case $m = \infty$, and we prove that random information behaves very differently depending on whether $\sigma \in \ell_2$ or not. For $\sigma \notin \ell_2$ we get $\mathbb{E}[\mathcal{R}_n] = \sigma_1$ and random information is completely useless. For $\sigma \in \ell_2$ the expected radius tends to zero at least at rate $o(1/\sqrt{n})$ as $n \to \infty$. In the important case \[ \sigma_k \asymp k^{-\alpha} \ln^{-\beta}(k+1), \] where $\alpha > 0$ and $\beta \in \mathbb{R}$ (which corresponds to various Sobolev embeddings), we prove \begin{equation*} \mathbb{E}[\mathcal{R}_n(\sigma)] \asymp \left\{ \begin{array}{cl} \sigma_1 & \text{if} \quad \alpha < 1/2 \text{ or } \beta \leq \alpha = 1/2, \\[2mm] \sigma_{n+1} \, \sqrt{\ln(n+1)} & \text{if} \quad \beta > \alpha = 1/2, \\[2mm] \sigma_{n+1} & \text{if} \quad \alpha > 1/2. \end{array} \right. \end{equation*} In the proofs we use a comparison result for Gaussian processes à la Gordon, exponential estimates for sums of chi-squared random variables, and estimates for the extreme singular values of (structured) Gaussian random matrices. The upper bound is constructive: it is proven for the worst-case error of a least squares estimator.
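The finite-dimensional setup can be explored numerically. The following Python sketch is not from the paper; the helper random_radius and all parameter values are illustrative assumptions. It samples a random codimension-$n$ subspace as the kernel of an $n \times m$ standard Gaussian matrix $G$ and computes the circumradius of $\mathcal{E} \cap \ker G$ exactly: writing $x = By$ with $B$ an orthonormal basis of $\ker G$, the ellipsoid constraint $x^{T} D^{-2} x \le 1$ (with $D = \mathrm{diag}(\sigma)$) becomes $y^{T} M y \le 1$ for $M = B^{T} D^{-2} B$, so the radius is $1/\sqrt{\lambda_{\min}(M)}$.

    import numpy as np

    def random_radius(sigma, n, rng):
        """One sample of R_n: the circumradius of the ellipsoid with
        semi-axes sigma intersected with the kernel of an n x m
        standard Gaussian matrix (a random subspace of codimension n)."""
        m = len(sigma)
        G = rng.standard_normal((n, m))
        # Orthonormal basis of ker(G): the last m - n right singular
        # vectors of G (G has full rank n almost surely).
        _, _, Vt = np.linalg.svd(G)
        B = Vt[n:].T                            # m x (m - n)
        # On ker(G) write x = B y; then ||x||_2 = ||y||_2 and the
        # constraint x^T D^{-2} x <= 1 becomes y^T M y <= 1.
        M = B.T @ (B / sigma[:, None] ** 2)     # M = B^T D^{-2} B
        lam_min = np.linalg.eigvalsh(M)[0]      # eigenvalues ascending
        return 1.0 / np.sqrt(lam_min)           # circumradius

    rng = np.random.default_rng(0)
    m, n, alpha = 500, 20, 1.0                  # sigma_k = k^{-alpha}, alpha > 1/2
    sigma = (1.0 + np.arange(m)) ** (-alpha)
    samples = [random_radius(sigma, n, rng) for _ in range(50)]
    print(np.mean(samples), sigma[n])           # compare E[R_n] with sigma_{n+1}

Since $\alpha > 1/2$ here, the sample mean should be of the same order as $\sigma_{n+1}$, matching the third case of the displayed asymptotics; taking $\alpha < 1/2$ instead should push the estimate toward $\sigma_1$.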


Updated: 2021-11-09