Direct CMOS Implementation of Neuromorphic Temporal Neural Networks for Sensory Processing
arXiv - CS - Emerging Technologies | Pub Date: 2020-08-27 | DOI: arxiv-2009.00457
Harideep Nair, John Paul Shen, James E. Smith

Temporal Neural Networks (TNNs) use time as a resource to represent and process information, mimicking the behavior of the mammalian neocortex. This work focuses on implementing TNNs using off-the-shelf digital CMOS technology. A microarchitecture framework is introduced with a hierarchy of building blocks: multi-neuron columns, multi-column layers, and multi-layer TNNs. We present the direct CMOS gate-level implementation of the multi-neuron column model as the key building block for TNNs. Post-synthesis results are obtained using Synopsys tools and a 45 nm CMOS standard cell library. The TNN microarchitecture framework is embodied in a set of characteristic equations for assessing the total gate count, die area, compute time, and power consumption of any TNN design. We develop a multi-layer TNN prototype of 32M gates. In a 7 nm CMOS process, it occupies only 1.54 mm^2 of die area, consumes 7.26 mW of power, and can process 28x28 images at 107M FPS (9.34 ns per image). We evaluate the prototype's performance and complexity relative to a recent state-of-the-art TNN model.
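The abstract describes characteristic equations that estimate total gate count, die area, compute time, and power from the column/layer hierarchy. The paper's actual equations and coefficients are not reproduced on this page, so the sketch below is only a hypothetical illustration of how such a hierarchical cost model could be organized in code; the per-gate figures, the gates-per-synapse and gates-per-neuron constants, and the parameter names (p synapses per neuron, q neurons per column) are placeholder assumptions, not values from the paper.

```python
# Hypothetical sketch of a hierarchical TNN cost model (column -> layer -> network).
# All coefficients below are placeholders, NOT the paper's characteristic equations.

from dataclasses import dataclass


@dataclass
class ColumnConfig:
    p: int  # synapses (inputs) per neuron  (assumed parameter name)
    q: int  # neurons per column            (assumed parameter name)


@dataclass
class TechParams:
    # Placeholder per-gate figures for some CMOS node (illustrative values only).
    area_per_gate_um2: float = 0.1
    power_per_gate_uw: float = 0.0002
    gates_per_synapse: int = 10       # assumed gate cost of one synapse circuit
    gates_per_neuron_body: int = 50   # assumed gate cost of neuron body + WTA share


def column_gate_count(col: ColumnConfig, tech: TechParams) -> int:
    """Gate count for one multi-neuron column: q neurons, each with p synapses."""
    return col.q * (col.p * tech.gates_per_synapse + tech.gates_per_neuron_body)


def network_estimates(columns_per_layer, col: ColumnConfig, tech: TechParams):
    """Aggregate gate count, die area (mm^2), and power (mW) over all layers."""
    total_columns = sum(columns_per_layer)
    gates = total_columns * column_gate_count(col, tech)
    area_mm2 = gates * tech.area_per_gate_um2 * 1e-6
    power_mw = gates * tech.power_per_gate_uw * 1e-3
    return gates, area_mm2, power_mw


if __name__ == "__main__":
    # Example: a small two-layer design sized for 28x28 inputs; numbers are illustrative.
    gates, area, power = network_estimates(
        columns_per_layer=[64, 10],
        col=ColumnConfig(p=784, q=12),
        tech=TechParams(),
    )
    print(f"gates={gates}, area={area:.3f} mm^2, power={power:.3f} mW")
```

The point of such a model is that, once the per-building-block costs are characterized at a given technology node, the totals for any candidate TNN configuration follow by simple composition over the hierarchy rather than by re-synthesizing each design.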

Updated: 2020-09-02