Parasitic-Aware Modeling and Neural Network Training Scheme for Energy-Efficient Processing-in-Memory With Resistive Crossbar Array
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (IF 4.6). Pub Date: 2022-05-03. DOI: 10.1109/jetcas.2022.3172170
Tiancheng Cao, Chen Liu, Yuan Gao, Wang Ling Goh

This paper presents a parasitic-aware modeling method for fast and accurate simulation of processing-in-memory (PIM) neural networks (NNs) implemented in resistive memristor crossbar arrays. This work proposes an efficient and accurate crossbar line-resistance estimation model, named the $\gamma$-compact model, together with an associated NN training scheme that takes the impact of line resistance into account with time complexity of $O(mn)$ for an $m \times n$ resistive crossbar. The impact of the crossbar array parasitics on vector-matrix multiplication (VMM) computation accuracy and on multi-layer PIM NN inference accuracy is analyzed in detail. The advantage of the proposed model is demonstrated with a multilayer perceptron (MLP) of size $784 \times 128 \times 10$ implemented on resistive random access memory (RRAM) crossbar arrays for MNIST handwritten-digit classification. The proposed method reduces the VMM computation error by 186 and 17 times compared to the uncompensated method and the state-of-the-art compensation method, respectively. A maximum inference accuracy of 98.1% is achieved, only 0.17% below that of the ideal model.
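To make the line-resistance effect concrete, below is a minimal, illustrative Python/NumPy sketch (not the paper's $\gamma$-compact model) that compares the ideal crossbar VMM against a brute-force nodal-analysis solve of a 1R crossbar with one wire-resistance segment per cell. The function names, the single-resistance wire model, and the parameter values are assumptions made for illustration only; the full nodal solve costs roughly $O((mn)^3)$, which is the expense that a compact $O(mn)$ estimation model such as the one proposed in the paper aims to avoid.

import numpy as np

def ideal_vmm(G, V):
    """Ideal crossbar VMM: output current of column j is sum_i G[i, j] * V[i]."""
    return G.T @ V

def crossbar_vmm_with_parasitics(G, V, r_wire=1.0):
    """
    Solve the full resistive network of an m x n 1R crossbar with wire
    resistance r_wire (ohms) per segment, via nodal analysis (illustrative,
    brute-force baseline).

    Node layout per cell (i, j):
      A[i, j] -- row-line node, index  i*n + j
      B[i, j] -- column-line node, index m*n + i*n + j
    Inputs V[i] drive row i at its left end through one wire segment;
    columns are virtually grounded at the bottom end (sense amplifiers).
    Returns the n output currents flowing into the column terminations.
    """
    m, n = G.shape
    g_w = 1.0 / r_wire                     # wire-segment conductance
    N = 2 * m * n                          # total number of unknown nodes
    Y = np.zeros((N, N))                   # nodal conductance matrix
    I = np.zeros(N)                        # equivalent current-source vector

    def a(i, j): return i * n + j          # row-node index
    def b(i, j): return m * n + i * n + j  # column-node index

    for i in range(m):
        for j in range(n):
            ai, bi = a(i, j), b(i, j)
            # memristor between row node and column node
            Y[ai, ai] += G[i, j]; Y[bi, bi] += G[i, j]
            Y[ai, bi] -= G[i, j]; Y[bi, ai] -= G[i, j]
            # row-line wire segments
            if j == 0:                     # driven end: V[i] through one segment
                Y[ai, ai] += g_w
                I[ai] += g_w * V[i]
            else:                          # segment between columns j-1 and j
                aj = a(i, j - 1)
                Y[ai, ai] += g_w; Y[aj, aj] += g_w
                Y[ai, aj] -= g_w; Y[aj, ai] -= g_w
            # column-line wire segments
            if i == m - 1:                 # grounded end (virtual ground)
                Y[bi, bi] += g_w
            else:                          # segment between rows i and i+1
                bj = b(i + 1, j)
                Y[bi, bi] += g_w; Y[bj, bj] += g_w
                Y[bi, bj] -= g_w; Y[bj, bi] -= g_w

    node_v = np.linalg.solve(Y, I)
    # output current of column j = current through the last wire segment to ground
    return np.array([g_w * node_v[b(m - 1, j)] for j in range(n)])

# Example usage (parameter values are illustrative assumptions):
rng = np.random.default_rng(0)
m, n = 16, 16
G = rng.uniform(1e-6, 1e-4, size=(m, n))   # cell conductances of 1 uS to 100 uS
V = rng.uniform(0.0, 0.2, size=m)          # small read voltages
i_ideal = ideal_vmm(G, V)
i_real = crossbar_vmm_with_parasitics(G, V, r_wire=2.0)  # 2 ohms per segment
print("relative VMM error:", np.abs(i_real - i_ideal) / np.abs(i_ideal))

With these example values, the column currents from the nodal solve fall below the ideal dot-product results, and the deviation typically grows for cells farther from the row drivers and column sense amplifiers as the IR drop accumulates; this is the VMM error that a line-resistance-aware training scheme compensates for.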
