Compiling Spiking Neural Networks to Mitigate Neuromorphic Hardware Constraints
arXiv - CS - Hardware Architecture Pub Date : 2020-11-27 , DOI: arxiv-2011.13965
Adarsha Balaji, Anup Das

Spiking Neural Networks (SNNs) are efficient computation models for performing spatio-temporal pattern recognition on resource- and power-constrained platforms. Executing SNNs on neuromorphic hardware can further reduce the energy consumption of these platforms. With increasing model size and complexity, mapping SNN-based applications to tile-based neuromorphic hardware is becoming increasingly challenging. This is attributed to the limitation of a neuro-synaptic core, viz. a crossbar, which can accommodate only a fixed number of pre-synaptic connections per post-synaptic neuron. For complex SNN-based models that have many neurons and many pre-synaptic connections per neuron, (1) connections may need to be pruned after training to fit onto the crossbar resources, leading to a loss in model quality, e.g., accuracy, and (2) the neurons and synapses need to be partitioned and placed on the neuro-synaptic cores of the hardware, which could lead to increased latency and energy consumption. In this work, we propose (1) a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units, significantly improving crossbar utilization while retaining all pre-synaptic connections, and (2) SpiNeMap, a novel methodology to map SNNs onto neuromorphic hardware with the aim of minimizing energy consumption and spike latency.
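To make the fan-in constraint and the unrolling idea concrete, the following is a minimal Python sketch of how a neuron with many pre-synaptic connections could be split into a chain of small, homogeneous units that each fit a crossbar with a fixed input limit. The function name unroll_fanin, the limit k, and the carry_in bookkeeping are illustrative assumptions for this page, not the paper's actual algorithm or code.

```python
def unroll_fanin(presynaptic_ids, k):
    """Split a neuron with len(presynaptic_ids) inputs into a chain of
    units, each using at most k crossbar inputs.

    Every unit after the first reserves one input for the running partial
    result of the previous unit (carry_in), so the original post-synaptic
    neuron is realised as a sequence of small units without pruning any
    pre-synaptic connection.  This is a sketch, not the paper's method.
    """
    assert k >= 2, "need room for at least one input plus a carry"
    units = []
    inputs = list(presynaptic_ids)

    # First unit takes up to k original pre-synaptic inputs.
    units.append({"inputs": inputs[:k], "carry_in": None})
    inputs = inputs[k:]

    # Each later unit takes k-1 original inputs plus the previous unit's output.
    while inputs:
        units.append({"inputs": inputs[:k - 1], "carry_in": len(units) - 1})
        inputs = inputs[k - 1:]
    return units


if __name__ == "__main__":
    # A neuron with 10 pre-synaptic connections on a crossbar limited to
    # 4 inputs per column unrolls into a chain of 3 units.
    for i, unit in enumerate(unroll_fanin(range(10), k=4)):
        print(i, unit)
```

Under this reading, crossbar utilization improves because every unit in the chain uses (nearly) all of its available inputs, at the cost of extra units and the latency of propagating the carried partial result along the chain.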

Updated: 2020-12-01