Pruning of Deep Spiking Neural Networks through Gradient Rewiring
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2021-05-11, DOI: arxiv-2105.04916
Yanqi Chen, Zhaofei Yu, Wei Fang, Tiejun Huang, Yonghong Tian

Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips. Since these chips are usually resource-constrained, compressing SNNs is crucial for their practical deployment. Most existing methods directly apply pruning approaches developed for artificial neural networks (ANNs) to SNNs, ignoring the differences between ANNs and SNNs and thus limiting the performance of the pruned SNNs. Moreover, these methods are only suitable for shallow SNNs. In this paper, inspired by synaptogenesis and synapse elimination in the nervous system, we propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weights for SNNs that enables us to optimize network structure seamlessly without retraining. Our key innovation is to redefine the gradient with respect to a new synaptic parameter, allowing better exploration of network structures by taking full advantage of the competition between pruning and regrowth of connections. Experimental results show that the proposed method achieves the smallest performance loss of SNNs on the MNIST and CIFAR-10 datasets reported so far. Moreover, it incurs only a $\sim$3.5% accuracy loss at an unprecedented 0.73% connectivity, which reveals a remarkable structure-refining capability in SNNs. Our work suggests that extremely high redundancy exists in deep SNNs. Our code is available at \url{https://github.com/Yanqi-Chen/Gradient-Rewiring}.
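The abstract describes the core mechanism: each connection's weight is driven by a hidden synaptic parameter whose gradient remains defined even when the connection is pruned, so pruning and regrowth compete during ordinary gradient descent. Below is a minimal, hypothetical PyTorch sketch of such a mechanism; the parameterization w = s * relu(theta), the class name RewiredLinear, and the straight-through gradient routing are illustrative assumptions for this page, not the authors' exact formulation (see the paper and the linked repository for that).

import torch

# Hypothetical sketch (not the authors' exact formulation): each connection has
# a fixed sign s and a hidden synaptic parameter theta. The effective weight is
# w = s * relu(theta): theta <= 0 means the connection is pruned (w = 0),
# theta > 0 means it is active. The gradient is routed to theta as if w = s * theta,
# so pruned connections keep receiving updates and can regrow when the loss favors it.

class RewiredLinear(torch.nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        init_w = torch.empty(out_features, in_features)
        torch.nn.init.kaiming_uniform_(init_w)
        self.sign = torch.nn.Parameter(init_w.sign(), requires_grad=False)  # fixed polarity
        self.theta = torch.nn.Parameter(init_w.abs())                       # hidden synaptic parameter
        self.bias = torch.nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Straight-through trick: the forward pass uses s * relu(theta), while the
        # backward pass behaves as w = s * theta, i.e. dL/dtheta = s * dL/dw even
        # for pruned connections.
        w_forward = self.sign * torch.relu(self.theta)
        w_backward = self.sign * self.theta
        w = w_backward + (w_forward - w_backward).detach()
        return torch.nn.functional.linear(x, w, self.bias)

# Usage: the layer can stand in for a torch.nn.Linear inside an SNN; connectivity
# is then read off as the fraction of theta > 0 (a sparsity-inducing penalty on
# theta, not shown here, would drive that fraction down during training).
layer = RewiredLinear(784, 10)
out = layer(torch.rand(32, 784))
connectivity = (layer.theta > 0).float().mean().item()
print(out.shape, f"connectivity: {connectivity:.2%}")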

Updated: 2021-05-12