DeepTempo: A Hardware-Friendly Direct Feedback Alignment Multi-Layer Tempotron Learning Rule for Deep Spiking Neural Networks
IEEE Transactions on Circuits and Systems II: Express Briefs (IF 4.0), Pub Date: 2021-03-04, DOI: 10.1109/tcsii.2021.3063784
Cong Shi, Tengxiao Wang, Junxian He, Jianghao Zhang, Liyuan Liu, Nanjian Wu

Layer-by-layer error back-propagation (BP) in deep spiking neural networks (SNNs) involves complex operations and high latency. To overcome these problems, we propose a method to train deep SNNs efficiently and rapidly by extending the well-known single-layer Tempotron learning rule to multiple SNN layers under the Direct Feedback Alignment (DFA) framework, which directly projects output errors onto each hidden layer through a fixed random feedback matrix. A trace-based optimization of Tempotron learning is also proposed. With these two techniques, the learning process becomes spatiotemporally local and well suited to neuromorphic hardware implementation. We applied the proposed hardware-friendly method to train multi-layer and deep SNNs, and obtained comparably high recognition accuracies on the MNIST and ETH-80 datasets.
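To make the learning scheme concrete, below is a minimal NumPy sketch of a DFA-style, trace-based Tempotron update for one hidden layer. It is an illustration under stated assumptions, not the paper's exact rule: the double-exponential PSP kernel, the sign nonlinearity on the projected error, and all sizes and hyperparameters (`n_in`, `tau_m`, `lr`, etc.) are placeholders. Only the structure follows the abstract: output errors reach the hidden layer through a fixed random feedback matrix `B`, and each layer's update uses locally accumulated PSP traces evaluated at the time of maximal membrane potential, as in the single-layer Tempotron.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and hyperparameters (placeholders, not from the paper).
n_in, n_hid, n_out = 100, 64, 10
T, dt = 100, 1.0                  # simulation window (steps) and step size (ms)
tau_m, tau_s = 20.0, 5.0          # membrane / synaptic PSP time constants (ms)
v_thresh, lr = 1.0, 0.01          # firing threshold and learning rate

W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # input  -> hidden weights (trained)
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # hidden -> output weights (trained)
B  = rng.normal(0.0, 0.1, (n_hid, n_out))  # fixed random feedback matrix (DFA)

def run_layer(weights, pre_spikes):
    """Integrate one layer of Tempotron-like units over the whole window.

    pre_spikes: (steps, n_pre) binary raster. Returns the layer's spike
    raster, the presynaptic PSP trace captured at each unit's time of
    maximal potential (the Tempotron eligibility), and which units fired.
    """
    steps, n_pre = pre_spikes.shape
    n_post = weights.shape[0]
    trace_m = np.zeros(n_pre)               # slow exponential trace
    trace_s = np.zeros(n_pre)               # fast exponential trace
    v_max = np.full(n_post, -np.inf)
    elig = np.zeros((n_post, n_pre))        # PSPs frozen at each unit's t_max
    fired = np.zeros(n_post, dtype=bool)
    spikes = np.zeros((steps, n_post))
    for t in range(steps):
        trace_m = trace_m * np.exp(-dt / tau_m) + pre_spikes[t]
        trace_s = trace_s * np.exp(-dt / tau_s) + pre_spikes[t]
        psp = trace_m - trace_s             # double-exponential PSP kernel
        v = weights @ psp                   # membrane potentials
        higher = v > v_max                  # track each unit's maximum
        v_max[higher] = v[higher]
        elig[higher] = psp                  # remember PSPs at the new t_max
        spike = v >= v_thresh
        fired |= spike
        spikes[t] = spike
    return spikes, elig, fired

def train_step(W1, W2, B, x_spikes, target):
    """One DFA-Tempotron update on a single example (mutates W1, W2)."""
    h_spikes, elig1, _ = run_layer(W1, x_spikes)
    _, elig2, out_fired = run_layer(W2, h_spikes)
    # Tempotron-style error: +1 for a miss, -1 for a false alarm, 0 if correct.
    err = target.astype(float) - out_fired.astype(float)
    W2 += lr * err[:, None] * elig2         # local output-layer update at t_max
    # DFA: project the output error through the fixed random matrix B
    # instead of back-propagating through W2, then apply the same local rule.
    h_err = np.sign(B @ err)
    W1 += lr * h_err[:, None] * elig1

# Toy usage: random Poisson-like input raster and a one-hot target pattern.
x = (rng.random((T, n_in)) < 0.05).astype(float)
y = np.zeros(n_out); y[3] = 1.0
train_step(W1, W2, B, x, y)
```

Note that the hidden-layer update needs only the locally stored PSP traces and the randomly projected error, never the transpose of `W2` or a pass backward through time, which is what makes the rule spatiotemporally local and attractive for neuromorphic hardware.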

Updated: 2021-05-04