An Overhead-free Max-pooling Method for SNN
IEEE Embedded Systems Letters (IF 1.6), Pub Date: 2020-03-01, DOI: 10.1109/les.2019.2919244
Shasha Guo , Lei Wang , Baozi Chen , Qiang Dou

Spiking neural networks (SNNs) have been shown to be accurate, fast, and efficient in classical machine vision tasks, such as object recognition and detection. Because training SNNs directly is difficult, a pretrained deep neural network is typically converted into an SNN. The max-pooling (MP) function is widely adopted in most state-of-the-art deep neural networks, so implementing it faithfully is important for preserving the accuracy of a converted SNN. However, this is difficult due to the dynamic nature of spikes. To the best of our knowledge, existing solutions rely on additional mechanisms beyond the spiking neuron model to implement or approximate MP, which introduces memory and computation overhead. In this letter, we propose a novel method that approximates MP using only the spiking neuron model and therefore incurs no overhead. We validate our method on three datasets and six networks, including three Oxford Visual Geometry Group (VGG)-like networks. The experimental results show that the performance (accuracy and convergence rate) of our method matches or exceeds that of the existing method.
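For context, the sketch below illustrates one simple way to approximate max-pooling using nothing more than an integrate-and-fire (IF) neuron per pooling window: with threshold 1 and a hard reset, the pooling neuron fires whenever any input in its window spikes, which upper-bounds the true maximum firing rate and is tightest when a single input dominates the window. The function name if_max_pool, the threshold constant, and the toy rate-coded input are all illustrative assumptions; this is a minimal sketch of the general idea, not necessarily the exact scheme proposed in the letter.

# Minimal sketch (assumptions noted above): spike-based max-pooling
# approximated with one integrate-and-fire neuron per 2x2 window.
import numpy as np

V_THRESH = 1.0  # firing threshold of the pooling IF neuron (illustrative)

def if_max_pool(spike_train, pool=2):
    """Approximate max-pooling over a rate-coded spike train.

    spike_train: binary array of shape (T, H, W), spikes per time step.
    Returns a binary array of shape (T, H // pool, W // pool).
    """
    T, H, W = spike_train.shape
    Ho, Wo = H // pool, W // pool
    v = np.zeros((Ho, Wo))                       # membrane potentials
    out = np.zeros((T, Ho, Wo), dtype=np.uint8)  # output spike train
    for t in range(T):
        # Sum incoming spikes of each pooling window (unit synaptic weights).
        window_sum = spike_train[t, :Ho * pool, :Wo * pool] \
            .reshape(Ho, pool, Wo, pool).sum(axis=(1, 3))
        v += window_sum
        fired = v >= V_THRESH
        out[t][fired] = 1
        v[fired] = 0.0  # hard reset: with threshold 1 this reduces to a
                        # spike-wise OR over the window, an upper bound on
                        # the max that is tight when one input dominates
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy rate-coded input: mostly quiet pixels, one dominant pixel per window.
    rates = np.full((8, 8), 0.02)
    rates[::2, ::2] = rng.uniform(0.2, 0.5, size=(4, 4))
    spikes = (rng.random((400, 8, 8)) < rates).astype(np.uint8)
    pooled = if_max_pool(spikes)
    # Compare output rates with the true max rate of each 2x2 window.
    true_max = rates.reshape(4, 2, 4, 2).max(axis=(1, 3))
    print("estimated:", pooled.mean(axis=0).round(2))
    print("target   :", true_max.round(2))

The point of the toy demo is only that the pooled spike train can be produced on the fly from the neuron's state, with no stored per-input rate estimates or counters, which is the kind of auxiliary overhead the letter contrasts against.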

Updated: 2020-03-01