Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-07-02, DOI: arxiv-2007.01204
Jibin Wu, Chenglin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan

Spiking neural networks (SNNs) have shown clear advantages over traditional artificial neural networks (ANNs) in latency and computational efficiency, owing to their event-driven nature and sparse communication. However, training deep SNNs is not straightforward. In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, referred to as progressive tandem learning of deep SNNs. By studying the equivalence between ANNs and SNNs in the discrete representation space, a primitive network conversion method is introduced that takes full advantage of the spike count to approximate the activation value of analog neurons. To compensate for the approximation errors arising from this primitive conversion, we further introduce a layer-wise learning method with an adaptive training scheduler to fine-tune the network weights. The progressive tandem learning framework also allows hardware constraints, such as limited weight precision and fan-in connections, to be progressively imposed during training. The SNNs thus trained demonstrate remarkable classification and regression capabilities on large-scale object recognition, image reconstruction, and speech separation tasks, while requiring at least an order of magnitude less inference time and fewer synaptic operations than other state-of-the-art SNN implementations. It therefore opens up a myriad of opportunities for pervasive mobile and embedded devices with a limited power budget.
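The core conversion idea — using a spiking neuron's spike count over a fixed time window to approximate an analog neuron's activation — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' code: it uses a basic integrate-and-fire neuron with soft reset, driven by a constant input for `T` timesteps, whose spike count divided by `T` quantizes the ReLU activation to a resolution of `1/T`.

```python
def if_spike_count(x, T=16, v_th=1.0):
    """Integrate-and-fire neuron driven by a constant input x for T timesteps.

    Returns the spike count; spike_count / T approximates ReLU(x) for
    x in [0, 1], quantized to steps of 1/T (the discrete representation
    space in which ANN and SNN activations can be matched).
    """
    v = 0.0           # membrane potential
    spikes = 0
    for _ in range(T):
        v += x        # integrate the constant input current
        if v >= v_th: # fire when the membrane potential crosses threshold
            spikes += 1
            v -= v_th # soft reset: subtract the threshold, keep the residue
    return spikes

# Spike count / T tracks ReLU(x), with quantization error bounded by 1/T;
# negative inputs never fire, matching ReLU's zero region.
for x in [-0.5, 0.0, 0.3, 0.75, 1.0]:
    print(x, if_spike_count(x) / 16)
```

The residual approximation error (at most `1/T` per neuron here, but compounding across layers) is exactly what the paper's layer-wise fine-tuning stage is described as compensating for.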

Updated: 2020-07-03