On Sharpness of Error Bounds for Univariate Approximation by Single Hidden Layer Feedforward Neural Networks
Results in Mathematics (IF 1.1), Pub Date: 2020-07-01, DOI: 10.1007/s00025-020-01239-8
Steffen Goebbels

A new non-linear variant of a quantitative extension of the uniform boundedness principle is used to show the sharpness of error bounds for univariate approximation by sums of sigmoid and ReLU functions. Single hidden layer feedforward neural networks with one input node perform such operations. Errors of best approximation can be expressed in terms of moduli of smoothness of the function to be approximated (i.e., to be learned). In this context, the quantitative extension of the uniform boundedness principle makes it possible to construct counterexamples showing that the approximation rates are best possible: the approximation errors do not belong to the little-o class of the given bounds. With piecewise linear activation functions, the problem under discussion becomes one of free knot spline approximation. The results of the present paper also hold for non-polynomial (and not piecewise defined) activation functions such as the inverse tangent. Based on the Vapnik–Chervonenkis dimension, first results are shown for the logistic function.
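To make the setting concrete, a small numerical illustration may help; it is a toy sketch under our own assumptions, not code from the paper. A single hidden layer feedforward network with one input node and n ReLU neurons computes a function of the form g(x) = c + a_1·ReLU(x − t_1) + … + a_n·ReLU(x − t_n), i.e., a piecewise linear function with knots t_k. The sketch below (Python with NumPy; the helper relu_interpolant is hypothetical) realizes the piecewise linear interpolant of a target f at equidistant knots as such a sum and reports the uniform error. For a target with second-order modulus of smoothness ω₂(f, δ) ≈ δ^(3/2), the printed errors decay roughly like n^(−3/2), the kind of rate expressible in moduli of smoothness; best approximation with free knots, as studied in the paper, can only be better, and equidistant knots are used here purely for simplicity.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_interpolant(f, knots):
    # Piecewise linear interpolant of f at the given knots, written as a
    # single hidden layer network: g(x) = f(t_0) + sum_k a_k * relu(x - t_k).
    t = np.asarray(knots, dtype=float)
    y = f(t)
    slopes = np.diff(y) / np.diff(t)                    # slope on each subinterval
    a = np.concatenate(([slopes[0]], np.diff(slopes)))  # output-layer weights
    def g(x):
        x = np.asarray(x, dtype=float)
        return y[0] + relu(x[..., None] - t[:-1]) @ a
    return g

f = lambda x: np.abs(x - 0.4) ** 1.5   # limited smoothness: omega_2(f, d) ~ d^(3/2)

for n in (4, 8, 16, 32, 64):
    knots = np.linspace(0.0, 1.0, n + 1)   # n hidden ReLU neurons on [0, 1]
    g = relu_interpolant(f, knots)
    xs = np.linspace(0.0, 1.0, 10001)
    err = np.max(np.abs(f(xs) - g(xs)))    # uniform (sup-norm) error
    print(f"n = {n:3d} ReLU neurons, uniform error = {err:.3e}")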
