A Microarchitecture Implementation Framework for Online Learning with Temporal Neural Networks
arXiv - CS - Emerging Technologies. Pub Date: 2021-05-27. arXiv:2105.13262
Harideep Nair, John Paul Shen, James E. Smith

Temporal Neural Networks (TNNs) are spiking neural networks that use time as a resource to represent and process information, similar to the mammalian neocortex. In contrast to compute-intensive Deep Neural Networks that employ separate training and inference phases, TNNs are capable of extremely efficient online incremental/continuous learning and are excellent candidates for building edge-native sensory processing units. This work proposes a microarchitecture framework for implementing TNNs using standard CMOS. Gate-level implementations of three key building blocks are presented: 1) multi-synapse neurons, 2) multi-neuron columns, and 3) unsupervised and supervised online learning algorithms based on Spike Timing Dependent Plasticity (STDP). The TNN microarchitecture is embodied in a set of characteristic scaling equations for assessing the gate count, area, delay, and power consumption of any TNN design. Post-synthesis results (in 45 nm CMOS) for the proposed designs are presented, and their online incremental learning capability is demonstrated.
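To make the building blocks concrete, the following is a minimal sketch of a multi-neuron column with spike-time coding and a simplified unsupervised STDP update. Everything here is an illustrative assumption — the ramp-no-leak neuron model, the winner-take-all column, the integer weight range, and the ±1 learning rule are stand-ins for the paper's actual gate-level designs, which the abstract does not specify.

```python
# Illustrative sketch (NOT the authors' design): a column of temporal
# neurons where information is coded in spike *times*, with a simplified
# winner-take-all + STDP online update applied one example at a time.
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, N_NEURONS = 8, 4    # synapses per neuron, neurons per column
W_MAX, THRESHOLD = 7, 16      # small integer weights suit gate-level logic
T_MAX = 8                     # temporal coding window (time steps)

def spike_time(in_times, w):
    """Ramp-no-leak neuron (an assumed model): after input i spikes at
    in_times[i], it adds w[i] to the potential every step; the neuron
    fires at the first step the potential crosses THRESHOLD."""
    potential = 0
    for t in range(T_MAX):
        potential += w[in_times <= t].sum()
        if potential >= THRESHOLD:
            return t
    return T_MAX  # no spike within the coding window

def column_step(in_times, W):
    """One online learning step: compute each neuron's spike time,
    let the earliest spiker win (lateral inhibition), then apply a
    simplified STDP rule to the winner's synapses only."""
    out = np.array([spike_time(in_times, W[j]) for j in range(N_NEURONS)])
    winner = int(np.argmin(out))          # earliest output spike wins
    if out[winner] < T_MAX:               # update only if the winner fired
        causal = in_times <= out[winner]  # input arrived before output: potentiate
        W[winner, causal] = np.minimum(W[winner, causal] + 1, W_MAX)
        W[winner, ~causal] = np.maximum(W[winner, ~causal] - 1, 0)
    return winner, out

# Online incremental learning: weights update after every single example,
# with no separate training phase.
W = rng.integers(0, W_MAX + 1, size=(N_NEURONS, N_INPUTS))
for _ in range(20):
    in_times = rng.integers(0, T_MAX, size=N_INPUTS)
    winner, out = column_step(in_times, W)
```

The integer weights, bounded increments, and fixed time window are chosen because they map naturally onto counters and comparators in CMOS, which is the spirit of the gate-level implementation the paper reports.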

Updated: 2021-05-28