H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2021-07-25 , DOI: arxiv-2107.11746
Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie

Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, their low accuracy under common local synaptic plasticity learning rules limits their application in many practical tasks. Recently, an emerging SNN supervised learning algorithm inspired by backpropagation through time (BPTT) from the domain of artificial neural networks (ANNs) has successfully boosted the accuracy of SNNs and improved their practicality. However, current general-purpose processors suffer from low efficiency when performing BPTT for SNNs because their optimizations are tailored to ANNs. On the other hand, current neuromorphic chips cannot support BPTT because they mainly adopt local synaptic plasticity rules for simplified implementation. In this work, we propose H2Learn, a novel architecture that achieves high efficiency for BPTT-based SNN learning while preserving the high accuracy of SNNs. We begin by characterizing the behaviors of BPTT-based SNN learning. Benefiting from the binary spike-based computation in the forward pass and the weight update, we first design lookup table (LUT) based processing elements in the Forward Engine and Weight Update Engine to make accumulations implicit and to fuse the computations of multiple input points. Second, benefiting from the rich sparsity in the backward pass, we design a dual-sparsity-aware Backward Engine that exploits both input and output sparsity. Finally, we apply a pipeline optimization across the engines to build an end-to-end solution for BPTT-based SNN learning. Compared with the modern NVIDIA V100 GPU, H2Learn achieves a 7.38x area saving, 5.74-10.20x speedup, and 5.25-7.12x energy saving on several benchmark datasets.
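To make the LUT idea concrete, here is a minimal software sketch of the principle the abstract describes, not the paper's actual hardware design: with binary (0/1) spike inputs, a dot product reduces to summing the weights at positions where a spike fired, so grouping k inputs lets one precompute all 2^k partial sums of the group's weights in a lookup table and replace k multiply-accumulates with a single table read. The group size K and function names are illustrative assumptions.

```python
import numpy as np

K = 4  # group size (hypothetical choice; hardware would fix this per PE)

def build_lut(weights):
    """Precompute partial sums for every 2^K spike pattern of a K-input group."""
    lut = np.zeros(2 ** K)
    for pattern in range(2 ** K):
        # bit i of the pattern says whether input i of the group spiked
        bits = [(pattern >> i) & 1 for i in range(K)]
        lut[pattern] = sum(b * w for b, w in zip(bits, weights))
    return lut

def lut_dot(spikes, weights):
    """Dot product of a binary spike vector with weights via grouped LUTs."""
    assert len(spikes) % K == 0
    total = 0.0
    for g in range(0, len(spikes), K):
        lut = build_lut(weights[g:g + K])  # in hardware, built once and reused
        # pack the group's K spikes into a LUT index
        idx = sum(int(spikes[g + i]) << i for i in range(K))
        total += lut[idx]  # one lookup replaces K multiply-accumulates
    return total

spikes = np.array([1, 0, 1, 1, 0, 0, 1, 0])
weights = np.arange(1.0, 9.0)
assert lut_dot(spikes, weights) == float(spikes @ weights)
```

In this toy version the LUT is rebuilt per call; the point of the hardware scheme is that the table (or the weight operands feeding it) is reused across many input points, which is what "fusing the computations of multiple input points" refers to.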

Updated: 2021-07-27