Teaching recurrent neural networks to infer global temporal structure from local examples
Nature Machine Intelligence (IF 23.8). Pub Date: 2021-04-19. DOI: 10.1038/s42256-021-00321-2
Jason Z. Kim, Zhixin Lu, Erfan Nozari, George J. Pappas, Danielle S. Bassett

The ability to store and manipulate information is a hallmark of computational systems. Whereas computers are carefully engineered to represent and perform mathematical operations on structured data, neurobiological systems adapt to perform analogous functions without needing to be explicitly engineered. Recent efforts have made progress in modelling the representation and recall of information in neural systems. However, precisely how neural systems learn to modify these representations remains far from understood. Here, we demonstrate that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and we explain the associated learning mechanism with new theory. Specifically, we drive an RNN with examples of translated, linearly transformed or pre-bifurcated time series from a chaotic Lorenz system, alongside an additional control signal that changes value for each example. By training the network to replicate the Lorenz inputs, it learns to autonomously evolve about a Lorenz-shaped manifold. Additionally, it learns to continuously interpolate and extrapolate the translation, transformation and bifurcation of this representation far beyond the training data by changing the control signal. Furthermore, we demonstrate that RNNs can infer the bifurcation structure of normal forms and period doubling routes to chaos, and extrapolate non-dynamical, kinematic trajectories. Finally, we provide a mechanism for how these computations are learned, and replicate our main results using a Wilson–Cowan reservoir. Together, our results provide a simple but powerful mechanism by which an RNN can learn to manipulate internal representations of complex information, enabling the principled study and precise design of RNNs.
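
The training-and-extrapolation procedure described in the abstract maps naturally onto a reservoir-computing setup. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes an echo-state-style tanh reservoir, an Euler-integrated Lorenz system, a constant control channel appended to the input, and a ridge-regression readout trained to predict the next input sample. All function names, hyperparameters and the extrapolation test (the unseen control value c_test) are illustrative assumptions.

```python
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, shift=0.0, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system; 'shift' translates the x coordinate."""
    x = np.array([1.0, 1.0, 1.0])
    traj = np.zeros((n_steps, 3))
    for t in range(n_steps):
        dx = np.array([
            sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2],
        ])
        x = x + dt * dx
        traj[t] = x
    traj[:, 0] += shift              # translated copy of the attractor
    return traj / 20.0               # crude rescaling to keep inputs in a reasonable range

def drive_reservoir(W, W_in, inputs):
    """Drive a tanh reservoir with [signal, control] inputs; return the state history."""
    r = np.zeros(W.shape[0])
    states = np.zeros((len(inputs), W.shape[0]))
    for t, u in enumerate(inputs):
        r = np.tanh(W @ r + W_in @ u)
        states[t] = r
    return states

rng = np.random.default_rng(0)
N, T = 500, 5000
W = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.05)  # sparse recurrent weights
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()                    # spectral radius below 1
W_in = rng.uniform(-0.1, 0.1, (N, 4))                            # 3 Lorenz dims + 1 control dim

# Training data: two translated Lorenz examples, each paired with a constant control value.
controls, shifts = [-0.5, 0.5], [-5.0, 5.0]
X_list, Y_list = [], []
for c, s in zip(controls, shifts):
    traj = lorenz_trajectory(T, shift=s)
    u = np.hstack([traj, np.full((T, 1), c)])        # input = Lorenz signal + control channel
    states = drive_reservoir(W, W_in, u)
    X_list.append(states[200:-1])                    # discard the initial transient
    Y_list.append(traj[201:])                        # target: replicate the next input sample
X, Y = np.vstack(X_list), np.vstack(Y_list)

# Ridge-regression readout: train the network to reproduce its own Lorenz input.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y).T

# Test: warm up with a short driven segment at an unseen control value, then close the loop
# so the network evolves autonomously about its learned, control-shifted manifold.
c_test = 1.0                                          # outside the training range [-0.5, 0.5]
r = np.zeros(N)
for y in lorenz_trajectory(500):
    r = np.tanh(W @ r + W_in @ np.append(y, c_test))
preds = []
y = W_out @ r
for _ in range(3000):
    r = np.tanh(W @ r + W_in @ np.append(y, c_test))
    y = W_out @ r
    preds.append(y)
preds = np.asarray(preds)
print("mean x of the autonomous orbit:", preds[:, 0].mean())
```

If the interpolation and extrapolation mechanism described above is at work, the mean x coordinate of the autonomous orbit should shift roughly in proportion to c_test, even though that control value never appeared during training.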




Updated: 2021-04-19