FSpiNN: An Optimization Framework for Memory-Efficient and Energy-Efficient Spiking Neural Networks
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems ( IF 2.7 ) Pub Date : 2020-10-02 , DOI: 10.1109/tcad.2020.3013049
Rachmad Vidya Wicaksana Putra , Muhammad Shafique

Spiking neural networks (SNNs) are gaining interest due to their event-driven processing, which can potentially enable low-power/energy computation on hardware platforms, while offering unsupervised learning capability through the spike-timing-dependent plasticity (STDP) rule. However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy, making them difficult to deploy on embedded systems, for instance, battery-powered mobile devices and IoT edge nodes. Toward this, we propose FSpiNN, an optimization framework for obtaining memory-efficient and energy-efficient SNNs for training and inference, with unsupervised learning capability while maintaining accuracy. This is achieved by: 1) reducing the computational requirements of neuronal and STDP operations; 2) improving the accuracy of STDP-based learning; 3) compressing the SNN through fixed-point quantization; and 4) incorporating the memory and energy requirements in the optimization process. FSpiNN reduces the computational requirements by reducing the number of neuronal operations, the STDP-based synaptic weight updates, and the STDP complexity. To improve the accuracy of learning, FSpiNN employs timestep-based synaptic weight updates and adaptively determines the STDP potentiation factor and the effective inhibition strength. The experimental results show that, compared to the state-of-the-art, FSpiNN achieves 7.5× memory saving, and improves the energy efficiency by 3.5× on average for training and by 1.8× on average for inference, across the MNIST and Fashion-MNIST datasets, with no accuracy loss for a network with 4900 excitatory neurons, thereby enabling energy-efficient SNNs for edge devices/embedded systems.
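To illustrate the compression step the abstract mentions, the following is a minimal sketch of fixed-point quantization of synaptic weights. The Q-format parameters (8-bit signed, 7 fractional bits) and the round-and-clip scheme are illustrative assumptions, not necessarily FSpiNN's exact design.

```python
def quantize_fixed_point(w, total_bits=8, frac_bits=7):
    """Quantize a float weight to signed fixed-point (here Q1.7 by default)
    and return the dequantized value. Assumed scheme: round to the nearest
    representable step, then clip to the signed integer range."""
    scale = 1 << frac_bits                 # step size is 1/scale
    qmax = (1 << (total_bits - 1)) - 1     # e.g. +127 for 8-bit signed
    qmin = -(1 << (total_bits - 1))        # e.g. -128 for 8-bit signed
    q = max(qmin, min(qmax, round(w * scale)))
    return q / scale

# Example: representable weights pass through; out-of-range weights saturate.
print(quantize_fixed_point(0.5))    # 0.5 (exactly representable in Q1.7)
print(quantize_fixed_point(1.5))    # saturates to 127/128 = 0.9921875
print(quantize_fixed_point(-2.0))   # saturates to -128/128 = -1.0
```

Storing each weight as an 8-bit fixed-point value instead of a 32-bit float alone gives a 4× memory reduction; combined with the reduced neuronal and STDP operations described above, this is the kind of saving that contributes to the reported 7.5× figure.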
