A ReRAM-Based Computing-in-Memory Convolutional-Macro With Customized 2T2R Bit-Cell for AIoT Chip IP Applications
IEEE Transactions on Circuits and Systems II: Express Briefs (IF 4.0). Pub Date: 2020-09-01. DOI: 10.1109/TCSII.2020.3013336
Fei Tan, Yiming Wang, Yiming Yang, Liran Li, Tian Wang, Feng Zhang, Xinghua Wang, Jianfeng Gao, Yongpan Liu

To reduce the energy consumption and latency incurred by the Von Neumann architecture, this brief develops a complete computing-in-memory (CIM) convolutional macro based on a ReRAM array for the convolutional layers of a LeNet-like convolutional neural network (CNN). The input layer and the first convolutional layer are binarized to achieve higher accuracy. The proposed ReRAM-CIM convolutional macro is suitable as an IP core for the convolutional layers of any binarized neural network. This brief customizes a bit-cell consisting of 2T2R ReRAM cells and groups $9 \times 8$ bit-cells into one unit to achieve high hardware computing accuracy, fast read/compute speed, and low power consumption. The ReRAM-CIM convolutional macro achieves a 50 ns product-sum computing time for one complete convolutional operation in a convolutional layer of the customized CNN, with an accuracy of 96.96% on the MNIST database and a peak energy efficiency of 58.82 TOPS/W.
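To make the product-sum operation concrete, the sketch below emulates, in plain Python/NumPy, the binarized multiply-accumulate that such a CIM macro evaluates for one kernel position. It assumes ±1 inputs and weights and a 3×3 window (an inference from the $9 \times 8$ bit-cell unit); the function name and values are illustrative, not the authors' circuit or code.

```python
import numpy as np

# Minimal sketch (not the authors' implementation) of the binarized
# product-sum that a ReRAM CIM convolutional macro computes for one
# kernel position, assuming +1/-1 inputs and weights.
def binarized_product_sum(window: np.ndarray, kernel: np.ndarray) -> int:
    # In the macro, each weight is stored in a 2T2R bit-cell and the
    # multiply-accumulate is performed in the analog domain along a
    # bit line; here the same dot product is emulated digitally.
    return int(np.sum(window * kernel))

# Hypothetical 3x3 binarized input window and kernel.
window = np.array([[ 1, -1,  1],
                   [ 1,  1, -1],
                   [-1,  1,  1]])
kernel = np.array([[ 1,  1, -1],
                   [-1,  1,  1],
                   [ 1, -1,  1]])

print(binarized_product_sum(window, kernel))  # integer in the range [-9, 9]
```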

Updated: 2020-09-01