A Novel Approximate Hamming Weight Computing for Spiking Neural Networks: an FPGA Friendly Architecture
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2021-04-29 , DOI: arxiv-2104.14594
Kaveh Akbarzadeh-Sherbaf, Mikaeel Bahmani, Danial Ghiaseddin, Saeed Safari, Abdol-Hossein Vahabie

Computing the Hamming weight of long, sparse binary vectors is an important building block in many scientific applications, particularly in spiking neural networks, which are our focus here. To improve both the area and latency of their FPGA implementations, we propose a method, inspired by synaptic transmission failure, that exploits FPGA lookup tables to compress long input vectors. To evaluate the effectiveness of this approach, we count the number of 1s in the compressed vector using a simple linear adder. We classify the compressors into shallow ones, with up to two levels of lookup tables, and deep ones, with more than two levels. The architecture generated by this approach yields area and latency reductions of up to 82% and 35%, respectively, across different configurations of shallow compressors. Moreover, our simulation results show that calculating the Hamming weight of a 1024-bit vector of a spiking neural network using only deep compressors preserves the chaotic behavior of the network while only slightly impacting learning performance.
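The idea in the abstract can be roughly sketched in software. The snippet below is an illustrative model only, not the paper's actual LUT architecture: it treats each compression level as a lossy 2:1 OR compressor over adjacent bit pairs (the function names, the pairwise grouping, and the OR mapping are assumptions for illustration). When both inputs of a pair are 1, one count is dropped, loosely mimicking synaptic transmission failure; for sparse vectors the estimate stays close to the true weight.

```python
def exact_hamming_weight(bits):
    """Reference popcount: the number of 1s in the vector."""
    return sum(bits)


def approximate_hamming_weight(bits, levels=1):
    """Approximate popcount via `levels` rounds of lossy 2:1
    OR-compression, followed by a simple linear sum.

    Each round ORs adjacent bit pairs, so a pair (1, 1) collapses
    to a single 1 and undercounts by one -- analogous to a spike
    lost through transmission failure. Sparse inputs rarely place
    two 1s in the same pair, so the error is typically small.
    """
    v = list(bits)
    for _ in range(levels):
        if len(v) % 2:          # pad to even length
            v.append(0)
        v = [v[i] | v[i + 1] for i in range(0, len(v), 2)]
    # "Simple linear adder" over the compressed vector.
    return sum(v)
```

For a sparse 1024-bit vector whose 1s land in distinct groups, the approximation matches the exact weight; for dense inputs it systematically undercounts, which is the trade-off the paper evaluates against network behavior.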

Updated: 2021-05-03