Temporal Backpropagation for Spiking Neural Networks with One Spike per Neuron
International Journal of Neural Systems (IF 8.0) Pub Date: 2020-03-17, DOI: 10.1142/s0129065720500276
Saeed Reza Kheradpisheh, Timothée Masquelier

We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer, the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, named S4NN, akin to traditional error backpropagation, yet based on latencies. We show how approximated error gradients can be computed backward in a feedforward network with any number of layers. This approach reaches state-of-the-art performance for supervised multilayer fully connected SNNs: a test accuracy of 97.4% on the MNIST dataset and 99.2% on the Caltech Face/Motorbike dataset. Yet the neuron model that we use, the non-leaky integrate-and-fire neuron, is much simpler than those used in all previous works. The source code of the proposed S4NN is publicly available at https://github.com/SRKH/S4NN .
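To make the coding scheme concrete, the following sketch simulates one layer of non-leaky integrate-and-fire neurons under time-to-first-spike coding: each input neuron emits a single spike at a fixed latency, each output neuron fires once at its first threshold crossing, and the readout class is the index of the earliest output spike. This is an illustrative toy in discrete time, not the authors' implementation (see the linked repository for that); the function name, threshold, and time horizon are assumptions.

```python
import numpy as np

def if_layer_first_spike(spike_times, weights, threshold=1.0, t_max=256):
    """One layer of non-leaky integrate-and-fire neurons (illustrative).

    spike_times[i] is the single spike latency of input neuron i
    (time-to-first-spike coding). Each output neuron integrates the
    weighted input spikes and fires once, at the first time step its
    membrane potential reaches `threshold`; neurons that never cross
    the threshold are assigned the latest possible time, t_max.
    """
    n_out = weights.shape[0]
    potential = np.zeros(n_out)
    out_times = np.full(n_out, t_max)
    for t in range(t_max):
        # inputs whose single spike arrives exactly at time t
        active = spike_times == t
        potential += weights[:, active].sum(axis=1)
        # record first threshold crossing; already-fired neurons keep
        # their earlier firing time (one spike per neuron)
        newly_fired = (potential >= threshold) & (out_times == t_max)
        out_times[newly_fired] = t
    return out_times

# toy example: 4 input neurons, 3 output (readout) neurons
rng = np.random.default_rng(0)
spike_times = np.array([0, 3, 7, 12])       # one spike per input neuron
weights = rng.uniform(0.0, 0.6, size=(3, 4))
t_out = if_layer_first_spike(spike_times, weights)
predicted_class = int(np.argmin(t_out))      # first neuron to fire wins
```

In the readout layer, earlier firing means stronger evidence, so classification reduces to an `argmin` over output latencies; S4NN's learning rule then backpropagates errors defined on these latencies rather than on firing rates.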
