Coarse scale representation of spiking neural networks: backpropagation through spikes and application to neuromorphic hardware
arXiv - CS - Neural and Evolutionary Computing, Pub Date: 2020-07-13, DOI: arxiv-2007.06176
Angel Yanguas-Gil

In this work we explore recurrent representations of leaky integrate-and-fire neurons operating at a timescale equal to their absolute refractory period. Our coarse-timescale approximation is obtained using a probability distribution for spike arrivals that is uniform over this interval. This yields a discrete representation that exhibits the same dynamics as the continuous model, enabling efficient large-scale simulations and backpropagation through the recurrent implementation. We use this approach to train deep spiking neural networks comprising convolutional, all-to-all, and maxpool layers directly in PyTorch. We found that the recurrent model achieves high classification accuracy using spike trains only four time steps long during training, and that the trained networks transfer well back to continuous implementations of leaky integrate-and-fire neurons. Finally, we applied this approach to standard control problems as a first step toward exploring reinforcement learning on neuromorphic chips.
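To make the idea concrete, below is a minimal PyTorch sketch of a discrete recurrent leaky integrate-and-fire cell of the kind the abstract describes: one update per coarse time step, a hard reset standing in for the absolute refractory period, and a surrogate gradient so backpropagation can pass through the spike nonlinearity. All names here (SurrogateSpike, CoarseLIFCell), the rectangular surrogate, and the specific update rule are illustrative assumptions, not the paper's exact derivation from the uniform spike-arrival distribution.

```python
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in the backward pass.

    The rectangular surrogate window below is an assumption; the paper's
    exact surrogate may differ.
    """

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the threshold (rectangular window).
        surrogate = (v.abs() < 0.5).float()
        return grad_output * surrogate


class CoarseLIFCell(nn.Module):
    """One coarse-timescale LIF update per absolute refractory period.

    v[t+1] = alpha * v[t] * (1 - s[t]) + x[t]
    s[t+1] = H(v[t+1] - threshold)

    Multiplying by (1 - s[t]) resets the membrane after a spike, which
    stands in for the refractory period at this coarse timescale.
    """

    def __init__(self, alpha=0.9, threshold=1.0):
        super().__init__()
        self.alpha = alpha
        self.threshold = threshold

    def forward(self, x, state=None):
        # x: (batch, features) input current for one coarse time step.
        if state is None:
            v = torch.zeros_like(x)
            s = torch.zeros_like(x)
        else:
            v, s = state
        v = self.alpha * v * (1.0 - s) + x
        s = SurrogateSpike.apply(v - self.threshold)
        return s, (v, s)


if __name__ == "__main__":
    # Unroll over a short spike train; the abstract reports training with
    # trains only four steps long.
    cell = CoarseLIFCell()
    x = torch.rand(8, 16)           # constant input current
    state = None
    for _ in range(4):              # four coarse time steps
        spikes, state = cell(x, state)
    print(spikes.mean().item())     # fraction of neurons spiking at the last step
```

Because the cell is an ordinary nn.Module unrolled over time, it composes with convolutional, all-to-all, and maxpool layers and trains end to end with standard PyTorch autograd, which is what makes the short-spike-train training described above practical.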

Updated: 2020-07-14