Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels
Research in the Mathematical Sciences (IF 1.2), Pub Date: 2021-01-05, DOI: 10.1007/s40687-020-00233-4
Weinan E, Stephan Wojtowytsch

We establish a scale separation of Kolmogorov width type between subspaces of a given Banach space under the condition that a sequence of linear maps converges much faster on one of the subspaces. The general technique is then applied to show that reproducing kernel Hilbert spaces are poor \(L^{2}\)-approximators for the class of two-layer neural networks in high dimension, and that multi-layer networks with small path norm are poor approximators for certain Lipschitz functions, also in the \(L^{2}\)-topology.
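
For context, the Kolmogorov width referenced in the title and abstract measures how well a set of functions can be approximated by the best linear subspace of a given dimension. As a reminder, the standard definition (a textbook formulation, not quoted from the paper) of the Kolmogorov \(n\)-width of a set \(K\) in a Banach space \(X\) is

\[ d_n(K, X) = \inf_{\substack{V \subseteq X \\ \dim V \le n}} \, \sup_{f \in K} \, \inf_{g \in V} \| f - g \|_X, \]

where the outer infimum runs over all linear subspaces of \(X\) of dimension at most \(n\). The "scale separation" of the abstract refers to widths of this type decaying at markedly different rates on the two subspaces under comparison.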



