Development of a Short-Term to Long-Term Supervised Spiking Neural Network Processor
IEEE Transactions on Very Large Scale Integration (VLSI) Systems (IF 2.8). Pub Date: 2020-08-18. DOI: 10.1109/tvlsi.2020.3013810
Tony James Bailey, Andrew J. Ford, Siddharth Barve, Jacob Wells, Rashmi Jha

We report a realization of a mixed-signal, supervised spiking neural network (SNN) architecture utilizing short-term plasticity in synaptic resistive random access memory (RRAM). First, the development of a phenomenological RRAM SPICE model based on previously reported device data is discussed. Then, the design of the neuroprocessor's architectural components is described. To achieve learning with the synaptic RRAM devices, a novel method of backpropagation in hardware SNNs, built around the proposed gated bidirectional amplifier circuit, is presented. A method to perform quantized weight transfer between short-term memory (STM) and long-term memory (LTM) is also proposed, allowing transient associative memories to be stored and reused repeatedly. The neuroprocessor is able to associate input digits with class labels, transfer the learned associations to a long-term register array, and then recall all digits when they are presented again. Its low operating power of 13.7 mW makes the system well suited for future integration into embedded systems with limited available energy. Finally, the neuroprocessor's tolerance to input noise and to internal device failure was measured to be 14% and 15%, respectively. We believe this work provides significant insight into the development of hardware SNNs, in addition to providing a framework for achieving more complex STM-to-LTM interactions in the future.
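The quantized STM-to-LTM weight transfer described above can be illustrated with a minimal numerical sketch. This is not the authors' circuit: the level count, weight range, and function names below are illustrative assumptions, standing in for whatever resolution the long-term register array actually supports; the paper implements this step in mixed-signal hardware, not software.

```python
import numpy as np

def transfer_stm_to_ltm(stm_weights, n_levels=8, w_min=0.0, w_max=1.0):
    """Quantize analog short-term weights onto discrete long-term levels.

    Hypothetical model of the transfer step: each volatile STM weight is
    clipped to the storable range and snapped to the nearest of
    `n_levels` evenly spaced non-volatile LTM levels.
    """
    clipped = np.clip(stm_weights, w_min, w_max)
    step = (w_max - w_min) / (n_levels - 1)      # spacing between LTM levels
    levels = np.round((clipped - w_min) / step)  # nearest discrete level index
    return w_min + levels * step

# Example: four analog STM weights quantized to 8 LTM levels
stm = np.array([0.03, 0.41, 0.77, 0.98])
ltm = transfer_stm_to_ltm(stm, n_levels=8)
```

Because the LTM copy is discrete and non-volatile, the learned association survives after the short-term conductance decays, which is what allows the recalled digits to be reused repeatedly.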
