Approximation rates for neural networks with encodable weights in smoothness spaces
Neural Networks (IF 6.0), Pub Date: 2020-11-27, DOI: 10.1016/j.neunet.2020.11.010
Ingo Gühring, Mones Raslan

We examine the necessary and sufficient complexity of neural networks to approximate functions from different smoothness spaces under the restriction of encodable network weights. Based on an entropy argument, we start by proving lower bounds for the number of nonzero encodable weights for neural network approximation in Besov spaces, Sobolev spaces and more. These results are valid for all sufficiently smooth activation functions. Afterwards, we provide a unifying framework for the construction of approximate partitions of unity by neural networks with fairly general activation functions. This allows us to approximate localized Taylor polynomials by neural networks and make use of the Bramble–Hilbert Lemma. Based on our framework, we derive almost optimal upper bounds in higher-order Sobolev norms. This work advances the theory of approximating solutions of partial differential equations by neural networks.
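As an illustration of the construction the abstract refers to, the following sketch shows the localized-Taylor-polynomial argument in schematic form; the notation (grid width $1/N$, bump functions $\phi_m$, local polynomials $p_m$) is introduced here for exposition and is not taken verbatim from the paper. A target $f \in W^{n,p}((0,1)^d)$ is approximated by

$$
f \;\approx\; \sum_{m \in \{0,\dots,N\}^d} \phi_m \, p_m,
\qquad \sum_{m} \phi_m \equiv 1 \ \text{on } (0,1)^d,
$$

where each $\phi_m$ is a neural-network bump localized near the grid point $x_m = m/N$ (the networks forming an approximate partition of unity) and $p_m$ is an averaged Taylor polynomial of $f$ at $x_m$. The Bramble–Hilbert Lemma controls the local error, roughly $\|f - p_m\|_{W^{k,p}(\operatorname{supp}\phi_m)} \lesssim N^{-(n-k)} |f|_{W^{n,p}}$, and it remains to realize each product $\phi_m \, p_m$ approximately by a subnetwork with encodable weights, which is where the assumptions on the activation function enter.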




Updated: 2020-12-09