Approximation error of Fourier neural networks
Statistical Analysis and Data Mining (IF 2.1) Pub Date: 2021-03-23, DOI: 10.1002/sam.11506
Abylay Zhumekenov, Rustem Takhanov, Alejandro J. Castro, Zhenisbek Assylbekov

The paper investigates the approximation error of two-layer feedforward Fourier Neural Networks (FNNs). Such networks are motivated by the approximation properties of Fourier series. Several implementations of FNNs have been proposed since the 1980s: by Gallant and White, Silvescu, Tan, Zuo and Cai, and Liu. The main focus of our work is Silvescu's FNN, because its activation function does not fall into the class of networks in which a linearly transformed input is passed through an activation function; networks of the latter kind were extensively studied by Hornik. For Silvescu's nontrivial FNN, the convergence rate is proven to be of order O(1/n). The paper further investigates the classes of functions approximated by Silvescu's FNN, which turn out to lie in the Schwartz space and the space of positive definite functions.
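To illustrate how Silvescu's architecture differs from the Hornik-type "activation of a linear transform" networks, the following is a minimal sketch (not the authors' code) of a two-layer FNN whose hidden units take a product of cosines over the input coordinates, one frequency and phase per coordinate, rather than a single activation of a linear combination. The function name `silvescu_fnn` and the specific product-of-cosines form are assumptions for illustration.

```python
import numpy as np

def silvescu_fnn(x, W, B, c):
    """Sketch of a Silvescu-style Fourier neural network (assumed form).

    x: (d,) input vector
    W: (n, d) per-coordinate frequencies for n hidden units
    B: (n, d) per-coordinate phases
    c: (n,) output-layer weights

    f(x) = sum_i c_i * prod_j cos(W[i, j] * x[j] + B[i, j])

    Note: each input coordinate is passed through its own cosine and the
    results are multiplied, so no single activation is applied to a
    linearly transformed input (unlike the networks studied by Hornik).
    """
    return c @ np.prod(np.cos(W * x + B), axis=1)

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
d, n = 3, 16                                  # input dimension, hidden units
W = rng.normal(size=(n, d))
B = rng.uniform(0.0, 2.0 * np.pi, size=(n, d))
c = rng.normal(size=n)

x = np.array([0.1, -0.5, 0.3])
print(silvescu_fnn(x, W, B, c))
```

By a product-to-sum identity, each such hidden unit expands into a sum of cosines of linear forms of the input, which is what ties the architecture back to multivariate Fourier series approximation.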

Updated: 2021-05-04