HPM: High-Precision Modeling of a Low-Power Inverter-Based Memristive Neural Network
Journal of Circuits, Systems and Computers (IF 1.5), Pub Date: 2021-05-26, DOI: 10.1142/s0218126621502741
Negin Mohajeri, Behzad Ebrahimi, Massoud Dousti

In this paper, we propose a high-precision memristive neural network whose neurons are implemented with complementary metal oxide semiconductor (CMOS) inverters. Because of process variations in the memristors and the sensitivity of the memristive crossbar structure to these fluctuations, a read operation with repetitive pulses and a feedback-based write scheme are used to implement neural networks trained by the ex-situ method. Moreover, accurate modeling of the neuron circuit (the CMOS inverter) and reducing the mismatch between the trained weights and the limited memristance range narrow the gap between simulation and implementation. To enforce the physical constraints imposed by the memristor framework, a linear function maps the trained weights to the acceptable range of memristances after the training phase. To mitigate the vanishing gradient problem caused by using the tanh activation function, and to improve network learning, several countermeasures are taken. In addition, fin field-effect transistor (FinFET) technology is used to prevent the accuracy of the inverter-based memristive neural network from degrading under process variations. Overall, our implementation improves the speed, area, power-delay product (PDP), and mean square error (MSE) of the training stage by 91.43%, 95.06%, 48.29%, and 81.64%, respectively.
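The linear weight-to-memristance mapping mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the function name, the memristance bounds (r_min, r_max), and the use of the observed weight extrema as the mapping endpoints are all assumptions for the example.

```python
import numpy as np

def map_weights_to_memristance(weights, r_min=1e3, r_max=1e5):
    """Affinely map trained weights onto a device's attainable
    memristance range [r_min, r_max].

    The bounds here are hypothetical placeholders; a real device
    would supply its own R_on/R_off values.
    """
    w = np.asarray(weights, dtype=float)
    w_min, w_max = w.min(), w.max()
    # Affine map: the smallest weight lands on r_min,
    # the largest on r_max, everything else in between.
    scale = (r_max - r_min) / (w_max - w_min)
    return r_min + (w - w_min) * scale
```

For example, a weight vector [-1.0, 0.0, 1.0] would be mapped so that -1.0 lands on r_min and 1.0 on r_max, keeping every programmed conductance inside the physically realizable window.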
