Data-Driven Neuromorphic DRAM-based CNN and RNN Accelerators
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2020-03-29 , DOI: arxiv-2003.13006
Tobi Delbruck, Shih-Chii Liu

The energy consumed by running large deep neural networks (DNNs) on hardware accelerators is dominated by the large amount of fast memory needed to store both states and weights. Memory of this size is currently only economically viable in DRAM. Although DRAM is high-throughput, low-cost memory (costing about 20X less than SRAM), its long random-access latency is poorly suited to the unpredictable access patterns of spiking neural networks (SNNs). In addition, fetching data from DRAM costs orders of magnitude more energy than performing arithmetic on that data. SNNs are energy-efficient only if local memory is available and few spikes are generated. This paper reports on our developments over the last 5 years of convolutional and recurrent deep neural network hardware accelerators that exploit either spatial or temporal sparsity, similar to SNNs, yet achieve state-of-the-art (SOA) throughput, power efficiency, and latency even when DRAM is used to store the weights and states of large DNNs.
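The temporal-sparsity idea the abstract alludes to can be sketched in a few lines: if only a few inputs change between timesteps, the accelerator need only fetch the weight columns for those changed inputs, so memory traffic scales with activity rather than with layer size. The following NumPy sketch is illustrative only (the function name and threshold parameter are our own, not from the paper); it shows a delta-style matrix-vector update under the assumption that the previous output is cached locally.

```python
import numpy as np

def delta_matvec(W, x, x_prev, y_prev, threshold=0.0):
    """Illustrative temporal-sparsity ("delta") update.

    Assumes y_prev = W @ x_prev from the previous timestep. Only the
    columns of W whose input changed by more than `threshold` are
    fetched and multiplied, so work and weight-memory traffic scale
    with the number of changed inputs, not the input dimension.
    With threshold=0 the result is exact; threshold>0 trades accuracy
    for fewer memory accesses.
    """
    delta = x - x_prev
    active = np.abs(delta) > threshold   # inputs that actually changed
    return y_prev + W[:, active] @ delta[active]

# Compare against the dense computation: one changed input out of eight
# means only one column of W needs to be read from (slow, costly) DRAM.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x_prev = rng.standard_normal(8)
x = x_prev.copy()
x[2] += 1.0                      # only one input changes this timestep
y_prev = W @ x_prev
y = delta_matvec(W, x, x_prev, y_prev)
assert np.allclose(y, W @ x)     # exact match with ~1/8 of the MACs
```

Spatial sparsity works analogously, but skips zero activations within a single timestep instead of unchanged activations across timesteps.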

Updated: 2020-03-31