RNNFast
ACM Journal on Emerging Technologies in Computing Systems (IF 2.1), Pub Date: 2020-09-18, DOI: 10.1145/3399670
Mohammad Hossein Samavatian, Anys Bacha, Li Zhou, Radu Teodorescu

Recurrent Neural Networks (RNNs) are an important class of neural networks designed to retain and incorporate context into current decisions. RNNs are particularly well suited for machine learning problems in which context is important, such as speech recognition and language translation. This work presents RNNFast, a hardware accelerator for RNNs that leverages an emerging class of non-volatile memory called domain-wall memory (DWM). We show that DWM is very well suited for RNN acceleration due to its very high density and low read/write energy. At the same time, the sequential nature of input/weight processing of RNNs mitigates one of the downsides of DWM, which is the linear (rather than constant) data access time. RNNFast is very efficient and highly scalable, with flexible mapping of logical neurons to RNN hardware blocks. The basic hardware primitive, the RNN processing element (PE), includes custom DWM-based multiplication, sigmoid and tanh units for high density and low energy. The accelerator is designed to minimize data movement by closely interleaving DWM storage and computation. We compare our design with a state-of-the-art GPGPU and find 21.8× higher performance with 70× lower energy.
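For context on the operations being accelerated, below is a minimal NumPy sketch of one step of a standard LSTM cell (a common RNN variant). It is purely illustrative of the sigmoid, tanh, and multiply primitives and of the sequential, time-step-by-time-step processing the abstract refers to; it does not reflect RNNFast's DWM-based hardware design, and the names (lstm_step, W, U, b) are placeholders chosen for this sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. The sigmoid/tanh gates and elementwise
    multiplies are the nonlinear and arithmetic units a hardware PE
    must provide. W, U, b hold the four gates' weights stacked."""
    z = W @ x_t + U @ h_prev + b            # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_t = f * c_prev + i * g                # updated cell state (retained context)
    h_t = o * np.tanh(c_t)                  # updated hidden state
    return h_t, c_t

# Inputs and weights are consumed strictly in time-step order; this
# sequential access pattern is why a linear-access memory such as DWM
# is not penalized the way a random-access workload would be.
hidden, inp = 4, 3
rng = np.random.default_rng(0)
W = rng.standard_normal((4 * hidden, inp))
U = rng.standard_normal((4 * hidden, hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for x_t in rng.standard_normal((5, inp)):   # a 5-step input sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h)
```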
