Quality of internal representation shapes learning performance in feedback neural networks
Physical Review Research (IF 3.5), Pub Date: 2021-02-23, DOI: 10.1103/physrevresearch.3.013176
Lee Susman, Francesca Mastrogiuseppe, Naama Brenner, Omri Barak

A fundamental feature of complex biological systems is the ability to form feedback interactions with their environment. A prominent model for studying such interactions is reservoir computing, where learning acts on low-dimensional bottlenecks. Despite the simplicity of this learning scheme, the factors contributing to or hindering the success of training in reservoir networks are in general not well understood. In this work, we study nonlinear feedback networks trained to generate a sinusoidal signal, and analyze how learning performance is shaped by the interplay between internal network dynamics and target properties. By performing exact mathematical analysis of linearized networks, we predict that learning performance is maximized when the target is characterized by an optimal, intermediate frequency which monotonically decreases with the strength of the internal reservoir connectivity. At the optimal frequency, the reservoir representation of the target signal is high-dimensional, desynchronized, and thus maximally robust to noise. We show that our predictions successfully capture the qualitative behavior of performance in nonlinear networks. Moreover, we find that the relationship between internal representations and performance can be further exploited in trained nonlinear networks to explain behaviors which do not have a linear counterpart. Our results indicate that a major determinant of learning success is the quality of the internal representation of the target, which in turn is shaped by an interplay between parameters controlling the internal network and those defining the task.
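For concreteness, feedback reservoir networks of this kind are commonly written as a rate model, roughly ẋ = −x + g·J·φ(x) + w_fb·z with a scalar readout z = w_outᵀφ(x), where only w_out is learned and z is fed back into the reservoir. The sketch below sets up such a loop and trains it to generate a sinusoid using teacher forcing plus ridge regression, an echo-state-style stand-in for the paper's learning scheme; the exact model, learning rule, and parameter values used in the paper are not given here, and names such as `g` and `omega` are illustrative assumptions standing in for the abstract's "internal connectivity strength" and "target frequency".

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt = 300, 0.9, 0.05            # reservoir size, internal gain, Euler step
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                  # output-feedback weights

omega = 1.0                           # target angular frequency (assumption)
T_train, T_test, washout = 4000, 1000, 500
t = np.arange(T_train + T_test) * dt
target = np.sin(omega * t)

# Teacher forcing: run the loop with the target injected as feedback,
# collecting reservoir states for the readout regression.
x = 0.1 * rng.standard_normal(N)
R = np.empty((T_train, N))
for k in range(T_train):
    r = np.tanh(x)
    R[k] = r
    x += dt * (-x + J @ r + w_fb * target[k])

# Ridge regression of the readout onto the target (initial transient discarded).
lam = 1e-4
A, b = R[washout:], target[washout:T_train]
w_out = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ b)

# Closed loop: the trained readout replaces the teacher signal.
mse = 0.0
for k in range(T_train, T_train + T_test):
    r = np.tanh(x)
    z = r @ w_out
    x += dt * (-x + J @ r + w_fb * z)
    mse += (z - target[k]) ** 2 / T_test
print(f"closed-loop MSE: {mse:.2e}")
```

Sweeping `omega` at fixed `g` (or vice versa) in this sketch is one way to probe the frequency dependence the abstract describes, with closed-loop error as a crude proxy for learning performance.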
