Enabling Resource-Aware Mapping of Spiking Neural Networks via Spatial Decomposition
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2020-09-19, DOI: arxiv-2009.09298
Adarsha Balaji, Shihao Song, Anup Das, Jeffrey Krichmar, Nikil Dutt, James Shackleford, Nagarajan Kandasamy, Francky Catthoor

With growing model complexity, mapping Spiking Neural Network (SNN)-based applications to tile-based neuromorphic hardware is becoming increasingly challenging. This is because the synaptic storage resources on a tile, viz. a crossbar, can accommodate only a fixed number of pre-synaptic connections per post-synaptic neuron. For complex SNN models that have many pre-synaptic connections per neuron, some connections may need to be pruned after training to fit the model onto the tile resources, leading to a loss in model quality, e.g., accuracy. In this work, we propose a novel unrolling technique that decomposes a neuron function with many pre-synaptic connections into a sequence of homogeneous neural units, where each neural unit is a function-computation node with two pre-synaptic connections. This spatial decomposition significantly improves crossbar utilization and retains all pre-synaptic connections, so no model quality is lost to connection pruning. We integrate the proposed technique within an existing SNN mapping framework and evaluate it with machine learning applications on DYNAP-SE, a state-of-the-art neuromorphic hardware platform. Our results demonstrate an average 60% lower crossbar requirement, 9x higher synapse utilization, 62% less wasted energy on the hardware, and a 0.8% to 4.6% increase in model quality.
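To make the decomposition idea concrete, the sketch below unrolls a neuron with an arbitrary fan-in into a chain of homogeneous two-input units, so that each unit respects a crossbar's fixed per-neuron fan-in limit while every original pre-synaptic connection is preserved. This is only an illustrative reading of the abstract, not the authors' implementation; the names NeuralUnit and decompose_neuron are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NeuralUnit:
    """One homogeneous function-computation node with at most two inputs."""
    inputs: List[str]   # names of the two pre-synaptic sources
    output: str         # name of the intermediate (or final) signal

def decompose_neuron(presynaptic: List[str], neuron_id: str) -> List[NeuralUnit]:
    """Unroll a neuron with len(presynaptic) inputs into a chain of 2-input units.

    A fan-in of k requires k - 1 chained units: the first unit combines the
    first two inputs, and each subsequent unit combines the running partial
    result with the next original input. No connection is dropped, so no
    pruning-induced accuracy loss is incurred.
    """
    if len(presynaptic) <= 2:
        return [NeuralUnit(inputs=list(presynaptic), output=neuron_id)]

    units: List[NeuralUnit] = []
    carry = presynaptic[0]
    for i, src in enumerate(presynaptic[1:], start=1):
        is_last = (i == len(presynaptic) - 1)
        out = neuron_id if is_last else f"{neuron_id}_partial{i}"
        units.append(NeuralUnit(inputs=[carry, src], output=out))
        carry = out
    return units

# Example: a neuron with 5 pre-synaptic connections becomes 4 chained units,
# each of which fits a crossbar row that allows only two inputs per neuron.
for u in decompose_neuron(["x1", "x2", "x3", "x4", "x5"], neuron_id="n0"):
    print(u.inputs, "->", u.output)
```

In this reading, the extra intermediate units are what drive the reported trade-off: crossbar slots are filled more densely (higher synapse utilization, fewer crossbars, less wasted energy), at the cost of mapping additional small units instead of pruning connections.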

Updated: 2020-09-22