Neuromorphic computation with spiking memristors: habituation, experimental instantiation of logic gates and a novel sequence-sensitive perceptron model.
Faraday Discussions (IF 3.4), Pub Date: 2019-02-18, DOI: 10.1039/c8fd00111a
Ella M Gale

Memristors have been compared to neurons and synapses, suggesting they would be well suited to neuromorphic computing. A change in voltage across a memristor causes a current spike, which imparts a short-term memory to the memristor and allows for through-time computation: this can perform arithmetical operations and sequential logic, or model short-term habituation to a stimulus. Using simple physical rules, both simple logic gates such as XOR and novel, more complex gates such as the arithmetic full adder (AFA) can be instantiated in sol-gel TiO2 plastic memristors. The adder makes use of the memristor's short-term memory to add together three binary values and outputs the sum, the carry digit and even the order they were input in, allowing for logically (but not physically) reversible computation. Only a single memristor is required to instantiate each gate, as additional input/output ports can be replaced with extra time-steps, allowing a single memristor to perform a hitherto unexpectedly large amount of computation; this may mitigate the memristor's slow operation speed and may relate to how neurons perform similarly large computations despite their slow operation speeds. These logic gates can be understood by modelling the memristors as a novel type of perceptron: one which is sensitive to input order. The memristor's short-term memory can change the input weights applied to later inputs, and thus the memristor gates cannot be accurately described by a single perceptron, requiring either a network of time-invariant perceptrons or a sequence-sensitive, self-reprogrammable perceptron. Thus, the AFA is best described as a sequence-sensitive perceptron that sorts binary inputs into classes corresponding to the arithmetical sum of the inputs. Co-development of memristor hardware alongside software (sequence-sensitive perceptron) models in trained neural networks would allow the porting of modern deep neural network architectures to low-power hardware neural-network chips.
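To make the sequence-sensitive perceptron idea concrete, here is a minimal toy sketch (not the paper's code or device data): a perceptron whose effective input weight is modulated by a decaying short-term memory of earlier inputs, standing in for the memristor's current-spike response. The base weight, decay constant and read-out thresholds below are illustrative assumptions, not fitted device parameters.

```python
# Toy model of a sequence-sensitive perceptron acting as an
# arithmetic full adder (AFA). All numeric constants are assumed
# for illustration only.

from dataclasses import dataclass
from itertools import product

@dataclass
class SequenceSensitivePerceptron:
    base_weight: float = 1.0   # weight seen by a fresh input spike
    decay: float = 0.5         # fraction of the memory trace kept per step (assumed)
    memory: float = 0.0        # decaying short-term memory of earlier inputs
    state: float = 0.0         # accumulated response, read out at the end

    def step(self, bit: int) -> None:
        # Earlier inputs raise the weight applied to later inputs, so the
        # final state depends on input order, not just the input multiset.
        self.state += (self.base_weight + self.memory) * bit
        self.memory = self.decay * (self.memory + bit)

def arithmetic_full_adder(bits):
    """Feed three bits to one device over three time-steps, then map the
    read-out state to the arithmetic sum 0..3 via assumed thresholds."""
    p = SequenceSensitivePerceptron()
    for b in bits:
        p.step(b)
    total = sum(p.state > t for t in (0.5, 1.5, 3.0))  # class = sum of inputs
    return total & 1, total >> 1, p.state              # sum bit, carry, raw state

for bits in product((0, 1), repeat=3):
    s, c, state = arithmetic_full_adder(bits)
    print(bits, "-> sum:", s, "carry:", c, "state:", round(state, 3))
```

In this sketch a single thresholded read-out of one device's state classifies the three inputs by their arithmetic sum, and the raw state retains partial order information within a sum class, loosely mirroring the paper's claim that the AFA also reports input order. A single time-invariant perceptron cannot reproduce this behaviour, since its weights are fixed across the input sequence.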

Updated: 2019-02-19