NEAT: Non-linearity Aware Training for Accurate and Energy-Efficient Implementation of Neural Networks on 1T-1R Memristive Crossbars
arXiv - CS - Emerging Technologies Pub Date : 2020-12-01 , DOI: arxiv-2012.00261
Abhiroop Bhattacharjee, Lakshya Bhatnagar, Youngeun Kim, Priyadarshini Panda

Memristive crossbars suffer from non-idealities (such as sneak paths) that degrade the computational accuracy of Deep Neural Networks (DNNs) mapped onto them. The 1T-1R synapse, which adds a transistor (1T) in series with the memristive synapse (1R), has been proposed to mitigate such non-idealities. We observe that the non-linear characteristics of the transistor affect the overall conductance of the 1T-1R cell, which in turn affects the Matrix-Vector-Multiplication (MVM) operation in crossbars. This 1T-1R non-ideality, arising from input-voltage-dependent non-linearity, is not only difficult to model or formulate but also causes drastic performance degradation of DNNs mapped onto crossbars. In this paper, we analyse the non-linearity of the 1T-1R crossbar and propose a novel Non-linearity Aware Training (NEAT) method to address these non-idealities. Specifically, we first identify the range of network weights that can be mapped into the 1T-1R cell within the linear operating region of the transistor. Thereafter, we regularize the DNN weights to lie within this linear operating range using an iterative training algorithm. Our iterative training significantly recovers the classification accuracy drop caused by the non-linearity. Moreover, we find that each layer has a different weight distribution and therefore requires a different transistor gate voltage to guarantee linear operation. Based on this observation, we achieve energy efficiency while preserving classification accuracy by applying heterogeneous gate voltage control to the 1T-1R cells across layers. Finally, we conduct various experiments on the CIFAR10 and CIFAR100 benchmark datasets to demonstrate the effectiveness of our non-linearity aware training. Overall, NEAT yields ~20% energy gain with less than 1% accuracy loss (with homogeneous gate control) when mapping ResNet18 onto 1T-1R crossbars.
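The iterative weight-regularization step described in the abstract can be pictured with a short sketch. Below is a minimal PyTorch illustration, not the paper's actual implementation: it assumes a hypothetical per-model bound `w_max` standing in for the weight range that maps onto the transistor's linear operating region (in the paper this range is derived from device characteristics and depends on the gate voltage). After each optimizer step, weights are clamped back into that range, so the trained network remains mappable onto the 1T-1R cells.

```python
import torch
import torch.nn as nn

def clamp_to_linear_region(model: nn.Module, w_max: float) -> None:
    """Clamp every weight into [-w_max, w_max], a stand-in for the range
    that maps onto the transistor's linear operating region."""
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-w_max, w_max)

# Toy training loop: clamp after each optimizer step, repeated over
# iterations -- a rough analogue of the "iterative training" idea.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
w_max = 0.5  # placeholder; the paper derives the bound from the 1T-1R cell

x = torch.randn(8, 32)                  # dummy inputs
y = torch.randint(0, 10, (8,))          # dummy labels
for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    clamp_to_linear_region(model, w_max)  # keep weights in the linear range
```

Under the heterogeneous gate-voltage scheme the abstract mentions, one would use a different `w_max` per layer, chosen from that layer's weight distribution, rather than a single global bound.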

Updated: 2020-12-02