Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2021-07-26, DOI: arxiv-2107.12374
Gourav Datta, Souvik Kundu, Peter A. Beerel

Spiking Neural Networks (SNNs) have emerged as an attractive alternative to traditional deep learning frameworks, since they offer higher computational efficiency on event-driven neuromorphic hardware. However, state-of-the-art (SOTA) SNNs suffer from high inference latency caused by inefficient input encoding and training techniques. The most widely used input coding schemes, such as Poisson-based rate coding, do not leverage the temporal learning capabilities of SNNs. This paper presents a training framework for low-latency, energy-efficient SNNs that uses a hybrid encoding scheme at the input layer, in which the analog pixel values of an image are applied directly during the first timestep and a novel variant of spike temporal coding is used during subsequent timesteps. In particular, neurons in every hidden layer are restricted to fire at most once per image, which increases activation sparsity. To train these hybrid-encoded SNNs, we propose a variant of the gradient-descent-based spike-timing-dependent backpropagation (STDB) mechanism with a novel cross-entropy loss function based on both the output neurons' spike times and membrane potentials. The resulting SNNs have reduced latency and high activation sparsity, yielding significant improvements in computational efficiency. We evaluate the proposed training scheme on image classification tasks from the CIFAR-10 and CIFAR-100 datasets using several VGG architectures. We achieve a top-1 accuracy of $66.46$\% with $5$ timesteps on the CIFAR-100 dataset with ${\sim}125\times$ less compute energy than an equivalent standard ANN. Additionally, our proposed SNN performs $5$-$300\times$ faster inference compared to other state-of-the-art rate- or temporally-coded SNN models.
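
To make the hybrid input encoding concrete, here is a minimal Python/NumPy sketch. The abstract only states that the analog pixel values are applied at the first timestep and a variant of spike temporal coding at subsequent ones; the specific time-to-first-spike mapping below, and the `hybrid_encode` helper itself, are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def hybrid_encode(image, num_steps=5):
    """Sketch of the hybrid input encoding described above.

    Timestep 0 carries the analog pixel values directly; the remaining
    timesteps carry a single-spike temporal code. The exact temporal
    variant is not specified in the abstract, so a simple
    time-to-first-spike rule (brighter pixel -> earlier spike) stands
    in for it here.
    """
    image = np.clip(image, 0.0, 1.0)               # normalized intensities
    frames = np.zeros((num_steps,) + image.shape, dtype=np.float32)
    frames[0] = image                              # analog values, first step

    # Map intensity to a spike time in {1, ..., num_steps - 1}:
    # intensity 1.0 fires at t = 1, intensity near 0 fires last.
    spike_t = 1 + np.round((1.0 - image) * (num_steps - 2)).astype(int)
    for t in range(1, num_steps):
        # Each pixel spikes at most once; zero pixels never spike.
        frames[t] = ((spike_t == t) & (image > 0)).astype(np.float32)
    return frames

# Encode a random 32x32 CIFAR-sized grayscale image over 5 timesteps.
x = np.random.rand(32, 32)
spikes = hybrid_encode(x)
print(spikes.shape, spikes[1:].sum(axis=0).max())  # (5, 32, 32) and <= 1.0
```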
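
The single-spike restriction on hidden neurons can be sketched the same way: an integrate-and-fire layer that masks each neuron out after its first spike. `single_spike_if_layer` is a hypothetical illustration under simplified neuron dynamics, not the paper's training code.

```python
import numpy as np

def single_spike_if_layer(inputs, weights, threshold=1.0):
    """Integrate-and-fire layer whose neurons fire at most once per image.

    `inputs` has shape (num_steps, in_dim). A boolean mask freezes each
    neuron after its first spike, which is what produces the high
    activation sparsity the paper reports. Leak and reset details are
    simplified, and the surrogate gradients STDB needs are omitted.
    """
    num_steps = inputs.shape[0]
    out_dim = weights.shape[1]
    v = np.zeros(out_dim)                       # membrane potentials
    fired = np.zeros(out_dim, dtype=bool)       # "already spiked" mask
    spikes = np.zeros((num_steps, out_dim), dtype=np.float32)

    for t in range(num_steps):
        v += inputs[t] @ weights                # integrate weighted input
        s = (v >= threshold) & ~fired           # spike only on first crossing
        spikes[t] = s
        fired |= s                              # lock neurons that fired
        v[s] = 0.0                              # reset after the spike
    return spikes, v

rng = np.random.default_rng(0)
x = rng.random((5, 64))                         # 5 timesteps of encoded input
w = rng.normal(0.0, 0.3, size=(64, 10))
s, v_final = single_spike_if_layer(x, w)
print(s.sum(axis=0).max())                      # <= 1.0: at most one spike each
```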
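
Finally, a hedged sketch of a loss in the spirit of the proposed cross-entropy over output spike times and membrane potentials; the abstract does not give the exact formulation, so the choices below are assumptions.

```python
import torch
import torch.nn.functional as F

def hybrid_ce_loss(spike_times, membrane_potentials, target, num_steps=5):
    """Hedged sketch of a cross-entropy loss over both spike times and
    membrane potentials, in the spirit of the STDB variant above.

    Here an earlier spike yields a larger logit, and a neuron that never
    fired (encoded as spike time == num_steps) falls back to its final
    membrane potential; both choices are illustrative assumptions.
    """
    fired = spike_times < num_steps
    logits = torch.where(fired,
                         (num_steps - spike_times).float(),
                         membrane_potentials)
    return F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))

# 10-class output layer (CIFAR-10-sized); class 1 fired first, at t = 2.
spike_times = torch.tensor([5, 2, 5, 5, 5, 5, 5, 5, 5, 5])
potentials = torch.rand(10) * 0.5
target = torch.tensor(1)
print(hybrid_ce_loss(spike_times, potentials, target).item())
```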

Updated: 2021-07-28