Reliable and Energy Efficient MLC STT-RAM Buffer for CNN Accelerators
arXiv - CS - Emerging Technologies. Pub Date: 2020-01-14, DOI: arxiv-2001.08806
Masoomeh Jasemi, Shaahin Hessabi, Nader Bagherzadeh

We propose a lightweight scheme in which the formation of a data block is changed so that it tolerates soft errors significantly better than the baseline. The key insight behind our work is that CNN weights are normalized between -1 and 1 after each convolutional layer, which leaves one bit unused in the half-precision floating-point representation. By taking advantage of this unused bit, we create a backup of the most significant bit to protect it against soft errors. Furthermore, since in MLC STT-RAMs the cost of memory operations (read and write) and the reliability of a cell are content-dependent (some bit patterns require larger current and longer time, and are also more susceptible to soft errors), we rearrange the data block to minimize the number of costly bit patterns. Combining these two techniques provides the same accuracy as an error-free baseline while reducing read and write energy by 9% and 6%, respectively.
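As an illustration of the sign-bit backup idea (a minimal sketch, not the paper's exact implementation): for an IEEE-754 half-precision weight with |w| &lt;= 1, the biased exponent is at most 01111, so the exponent's most significant bit is always 0 and can hold a copy of the sign bit. The helper names `encode`/`decode` are hypothetical:

```python
import numpy as np

# Bit positions in IEEE-754 half precision (1 sign, 5 exponent, 10 mantissa bits).
SIGN_BIT = 15    # most significant bit: the sign
SPARE_BIT = 14   # exponent MSB: always 0 for normalized weights with |w| <= 1

def encode(w):
    """Duplicate the sign bit into the spare exponent MSB before storing."""
    bits = int(np.float16(w).view(np.uint16))
    sign = (bits >> SIGN_BIT) & 1
    return bits | (sign << SPARE_BIT)

def decode(bits):
    """Restore the weight; report whether a single-bit upset was detected.

    A flip of the spare bit is fully corrected (it is forced back to 0);
    a flip of the sign bit is detected via the mismatch with its backup.
    """
    sign = (bits >> SIGN_BIT) & 1
    backup = (bits >> SPARE_BIT) & 1
    upset_detected = sign != backup
    clean = bits & ~(1 << SPARE_BIT)  # spare bit is 0 in any valid weight
    return np.uint16(clean).view(np.float16), upset_detected

stored = encode(-0.5)
w, upset = decode(stored)                        # w == -0.5, upset == False
w2, upset2 = decode(stored ^ (1 << SPARE_BIT))   # simulate a soft error
# w2 == -0.5 (magnitude recovered), upset2 == True
```

Note that the redundancy is free: no extra storage is needed, because the backup occupies a bit position that the normalized weight range can never set.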

Updated: 2020-01-27