Metric Entropy Limits on Recurrent Neural Network Learning of Linear Dynamical Systems
arXiv - CS - Information Theory. Pub Date: 2021-05-06, DOI: arxiv-2105.02556. Clemens Hutter, Recep Gül, Helmut Bölcskei
One of the most influential results in neural network theory is the universal
approximation theorem [1, 2, 3], which states that continuous functions can be
approximated to within arbitrary accuracy by single-hidden-layer feedforward
neural networks. The purpose of this paper is to establish a result in this
spirit for the approximation of general discrete-time linear dynamical systems,
including time-varying systems, by recurrent neural networks (RNNs). For the
subclass of linear time-invariant (LTI) systems, we devise a quantitative
version of this statement. Specifically, measuring the complexity of the
considered class of LTI systems through metric entropy according to [4], we
show that RNNs can optimally learn, or identify in system-theory parlance,
stable LTI systems. For LTI systems whose input-output relation is
characterized through a difference equation, this means that RNNs can learn the
difference equation from input-output traces in a metric-entropy optimal
manner.
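The correspondence underlying the abstract can be made concrete: a discrete-time LTI system in state-space form, x_{t+1} = A x_t + B u_t, y_t = C x_t + D u_t, is reproduced exactly by a linear RNN cell whose weights match (A, B, C, D). The sketch below is illustrative only; the matrices and function names are hypothetical examples, not taken from the paper, and the paper's actual results concern approximation rates, not this identity.

```python
import numpy as np

# Hypothetical stable LTI system (spectral radius of A < 1 => stability).
A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])
D = np.array([[0.2]])

def lti_response(u):
    """Run the LTI system x_{t+1} = A x_t + B u_t, y_t = C x_t + D u_t
    on a scalar input sequence u, starting from the zero state."""
    x = np.zeros((A.shape[0], 1))
    ys = []
    for ut in u:
        ys.append((C @ x + D * ut).item())
        x = A @ x + B * ut
    return np.array(ys)

def linear_rnn_response(u, W_h=A, W_in=B, W_out=C, W_skip=D):
    """A linear RNN cell (identity activation). With weights equal to
    (A, B, C, D) it realizes the same input-output map as the LTI system."""
    h = np.zeros((W_h.shape[0], 1))
    ys = []
    for ut in u:
        ys.append((W_out @ h + W_skip * ut).item())
        h = W_h @ h + W_in * ut
    return np.array(ys)

u = np.random.default_rng(0).standard_normal(20)
assert np.allclose(lti_response(u), linear_rnn_response(u))
```

With a nonlinear activation in place of the identity, the equality becomes an approximation, which is where the metric-entropy analysis of the paper enters.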
Updated: 2021-05-07
神经网络理论中最有影响力的结果之一是通用逼近定理[1、2、3],该定理指出,单隐藏层前馈神经网络可以将连续函数逼近任意精度。本文的目的是在这种精神下建立一个结果,用于通过递归神经网络(RNN)逼近一般离散时间线性动力系统-包括时变系统。对于线性时不变(LTI)系统的子类,我们设计了此声明的定量版本。具体来说,根据[4]通过度量熵来衡量所考虑的LTI系统类别的复杂性,我们表明RNN可以最佳地学习-或以系统理论的话来识别-稳定的LTI系统。