Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation
Neural Computation (IF 2.7) Pub Date: 2021-09-16, DOI: 10.1162/neco_a_01418
Alfred Rajakumar, John Rinzel, Zhe S. Chen

Recurrent neural networks (RNNs) have been widely used to model sequential neural dynamics ("neural sequences") of cortical circuits in cognitive and motor tasks. Incorporating biological constraints such as Dale's principle helps elucidate the neural representations and mechanisms of the underlying circuits. We trained an excitatory-inhibitory RNN to learn neural sequences in a supervised manner and studied the representations and dynamic attractors of the trained network. The trained RNN robustly triggered the sequence in response to various input signals and interpolated time-warped inputs for sequence representation. Interestingly, a learned sequence can repeat periodically when the RNN evolves beyond the duration of a single sequence. The eigenspectrum of the learned recurrent connectivity matrix, with growing or damping modes, together with the RNN's nonlinearity, was adequate to generate a limit cycle attractor. We further examined the stability of dynamic attractors while training the RNN to learn two sequences. Together, our results provide a general framework for understanding neural sequence representation in the excitatory-inhibitory RNN.
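A minimal sketch of the two ingredients the abstract names: enforcing Dale's principle on a recurrent weight matrix and inspecting its eigenspectrum for growing versus damping modes. The 80/20 excitatory-inhibitory split and the sign-matrix parameterization W = |M| D are common conventions in trained E-I RNN models, not necessarily the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh = 80, 20          # assumed 80/20 E/I split (illustrative)
n = n_exc + n_inh

# Unconstrained weights; scale 1/sqrt(n) keeps the spectrum bounded.
M = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))

# Dale's principle: each unit's outgoing weights share one sign.
# Parameterize W = |M| @ D with a fixed diagonal sign matrix D, so
# column j (unit j's outputs) is wholly excitatory (+) or inhibitory (-).
D = np.diag([1.0] * n_exc + [-1.0] * n_inh)
W = np.abs(M) @ D

# Eigenspectrum of the recurrent connectivity: eigenvalues with positive
# real part correspond to growing modes, negative real part to damping
# modes; the paper relates these, plus the RNN nonlinearity, to the
# emergence of a limit cycle attractor.
eigvals = np.linalg.eigvals(W)
growing = int(np.sum(eigvals.real > 0))
print(f"{growing} growing modes out of {n}")
```

In training, the gradient would update M while D stays fixed, so the sign constraint is preserved throughout learning.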




Updated: 2021-09-17