Negative results for approximation using single layer and multilayer feedforward neural networks
Journal of Mathematical Analysis and Applications (IF 1.3), Pub Date: 2021-02-01, DOI: 10.1016/j.jmaa.2020.124584
J.M. Almira, P.E. Lopez-de-Teruel, D.J. Romero-López, F. Voigtlaender

We prove a negative result for the approximation of functions defined on compact subsets of $\mathbb{R}^d$ (where $d \geq 2$) by feedforward neural networks with one hidden layer and an arbitrary continuous activation function. In a nutshell, this result establishes the existence of target functions that are as hard to approximate by such networks as one wishes. We also prove an analogous result (for general $d \in \mathbb{N}$) for neural networks with an \emph{arbitrary} number of hidden layers, for activation functions that are either rational functions or continuous splines with finitely many pieces.
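
To make the shape of such a statement concrete, here is a schematic sketch in notation introduced purely for illustration (the set $\Sigma_n(\sigma)$, the compact set $K$, and the tolerance sequence $(\varepsilon_n)$ are not taken from the paper, and the precise quantifiers may differ from those proved there). A one-hidden-layer network with $n$ neurons and activation function $\sigma$ computes an element of
\[
  \Sigma_n(\sigma) \;=\; \Bigl\{\, x \mapsto \sum_{i=1}^{n} c_i \, \sigma(\langle w_i, x \rangle + b_i) \;:\; c_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^d \,\Bigr\},
\]
and a negative result of the advertised kind asserts, roughly: for every sequence $\varepsilon_1 \geq \varepsilon_2 \geq \cdots > 0$ with $\varepsilon_n \to 0$, there exists a continuous target function $f$ on $K \subset \mathbb{R}^d$ such that
\[
  \inf_{g \in \Sigma_n(\sigma)} \| f - g \|_{C(K)} \;\geq\; \varepsilon_n \qquad \text{for infinitely many } n,
\]
i.e. no prescribed rate of decay of the best approximation error, however slow, is achieved for all such targets.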

Updated: 2021-02-01