DIET-SNN: A Low-Latency Spiking Neural Network With Direct Input Encoding and Leakage and Threshold Optimization
IEEE Transactions on Neural Networks and Learning Systems ( IF 10.2 ) Pub Date : 2021-10-02 , DOI: 10.1109/tnnls.2021.3111897
Nitin Rathi , Kaushik Roy

Bioinspired spiking neural networks (SNNs), operating with asynchronous binary signals (or spikes) distributed over time, can potentially lead to greater computational efficiency on event-driven hardware. State-of-the-art SNNs suffer from high inference latency, resulting from inefficient input encoding and suboptimal settings of the neuron parameters (firing threshold and membrane leak). We propose DIET-SNN, a low-latency deep spiking network trained with gradient descent to optimize the membrane leak and the firing threshold along with the other network parameters (weights). The membrane leak and threshold of each layer are optimized with end-to-end backpropagation to achieve competitive accuracy at reduced latency. The input layer directly processes the analog pixel values of an image without converting them to a spike train. The first convolutional layer converts the analog inputs into spikes: leaky-integrate-and-fire (LIF) neurons integrate the weighted inputs and generate an output spike when the membrane potential crosses the trained firing threshold. The trained membrane leak selectively attenuates the membrane potential, which increases activation sparsity in the network. The reduced latency combined with high activation sparsity provides massive improvements in computational efficiency. We evaluate DIET-SNN on image classification tasks from the CIFAR and ImageNet datasets on VGG and ResNet architectures. We achieve top-1 accuracy of 69% with five timesteps (inference latency) on the ImageNet dataset with 12× less compute energy than an equivalent standard artificial neural network (ANN). In addition, DIET-SNN performs 20-500× faster inference compared with other state-of-the-art SNN models.
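The LIF dynamics the abstract describes (leaky integration of weighted inputs, a spike when the membrane potential crosses the trained threshold, and direct input encoding via repeated analog inputs) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the parameter names (`leak`, `v_th`), the reset-by-subtraction rule, and the toy inputs are assumptions for exposition; in DIET-SNN the leak and threshold are per-layer parameters trained by backpropagation.

```python
# Minimal sketch of a leaky-integrate-and-fire (LIF) layer with a
# (per-layer) leak and firing threshold, as described in the abstract.
# In DIET-SNN these two scalars are trained with gradient descent;
# here they are fixed constants for illustration.

def lif_step(v, weighted_input, leak, v_th):
    """One timestep of LIF dynamics for a vector of neurons.

    v              : membrane potentials carried across timesteps
    weighted_input : weighted input currents at this timestep
    leak           : multiplicative membrane leak in (0, 1]
    v_th           : firing threshold
    Returns (new_v, spikes), where spikes is a 0/1 list.
    """
    new_v, spikes = [], []
    for vi, xi in zip(v, weighted_input):
        vi = leak * vi + xi          # leaky integration of weighted input
        if vi >= v_th:               # membrane potential crossed threshold
            spikes.append(1)
            vi -= v_th               # soft reset: subtract the threshold
        else:
            spikes.append(0)
        new_v.append(vi)
    return new_v, spikes

# Direct input encoding: the same analog (pixel-derived) inputs are
# applied at every timestep, and the first layer's LIF neurons convert
# them into spikes. Smaller inputs may never spike, which is the source
# of the activation sparsity the abstract mentions.
inputs = [0.9, 0.2, 0.6]             # toy weighted analog inputs
v = [0.0, 0.0, 0.0]
for t in range(5):                   # T = 5 timesteps, as in the paper
    v, s = lif_step(v, inputs, leak=0.8, v_th=1.0)
    print(t, s)
```

Note how the middle neuron (input 0.2) never reaches the threshold in five timesteps with this leak, so it emits no spikes at all; this is how a trained leak trades a little integration speed for sparser activations.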

Updated: 2021-10-02