Sparse Spiking Gradient Descent
arXiv - CS - Emerging Technologies. Pub Date: 2021-05-18. DOI: arxiv-2105.08810. Nicolas Perez-Nieves, Dan F. M. Goodman
There is an increasing interest in emulating Spiking Neural Networks (SNNs)
on neuromorphic computing devices due to their low energy consumption. Recent
advances have allowed training SNNs to a point where they start to compete with
traditional Artificial Neural Networks (ANNs) in terms of accuracy, while at
the same time being energy efficient when run on neuromorphic hardware.
However, the process of training SNNs is still based on dense tensor operations
originally developed for ANNs which do not leverage the spatiotemporally sparse
nature of SNNs. We present here the first sparse SNN backpropagation algorithm,
which achieves the same or better accuracy as current state-of-the-art methods
while being significantly faster and more memory efficient. We show the
effectiveness of our method on real datasets of varying complexity
(Fashion-MNIST, Neuromorphic-MNIST, and Spiking Heidelberg Digits), achieving a
speedup in the backward pass of up to 70x and up to 40% lower memory usage,
without losing accuracy.
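The core idea the abstract describes can be illustrated with a small sketch: in surrogate-gradient SNN training, the surrogate derivative of the spike function is nonzero only for entries where the membrane potential lies near the firing threshold, so the backward pass only needs to touch that sparse "active" set rather than the full dense tensor. The following NumPy sketch is illustrative only, assuming a boxcar surrogate window; the paper's actual surrogate function, layer structure, and GPU implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer state: membrane potentials over (time steps, neurons).
T, N = 100, 1000
threshold = 1.0
v = rng.normal(0.0, 0.5, size=(T, N))        # membrane potentials
upstream = rng.normal(size=(T, N))            # gradient flowing back from the loss

# Boxcar surrogate (illustrative choice): d(spike)/dv is nonzero only
# when |v - threshold| < width, i.e. near the firing threshold.
width = 0.1

# Dense backward pass: evaluate the surrogate at every (t, neuron) entry.
surrogate_dense = (np.abs(v - threshold) < width).astype(float)
grad_dense = upstream * surrogate_dense

# Sparse backward pass: gather only the active entries where the surrogate
# is nonzero -- this is the spatiotemporal sparsity the abstract exploits.
active = np.flatnonzero(np.abs(v - threshold).ravel() < width)
grad_sparse = np.zeros(T * N)
grad_sparse[active] = upstream.ravel()[active]
grad_sparse = grad_sparse.reshape(T, N)

# Both passes produce identical gradients, but the sparse one only does
# work proportional to the (typically small) active fraction.
assert np.allclose(grad_dense, grad_sparse)
print(f"active fraction: {active.size / (T * N):.3%}")
```

Because spiking activity is typically sparse in both time and space, the active fraction is small, which is the source of the reported speedup and memory savings.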
Updated: 2021-05-20