Spatial-Temporal Hybrid Neural Network With Computing-in-Memory Architecture
IEEE Transactions on Circuits and Systems I: Regular Papers ( IF 5.1 ) Pub Date : 2021-04-16 , DOI: 10.1109/tcsi.2021.3071956
Kangjun Bai , Lingjia Liu , Yang Yi

Deep learning (DL) has achieved unprecedented success in many real-world applications. However, DL is difficult to implement efficiently in hardware because it requires a complex gradient-based learning algorithm and high memory bandwidth for synaptic weight storage, especially in today's data-intensive environment. Computing-in-memory (CIM) strategies have emerged as an alternative for realizing energy-efficient neuromorphic applications in silicon, reducing the resources and energy required for neural computations. In this work, we exploit a CIM-based spatial-temporal hybrid neural network (STHNN) with a unique learning algorithm. Specifically, we integrate a multilayer perceptron with a recurrent delay-dynamical system, making the network linearly separable while it processes information in both the spatial and temporal domains, and reducing memory bandwidth and hardware overhead through the CIM architecture. The prototype, fabricated in a 180 nm CMOS process, is built entirely from analog components and yields an average on-chip classification accuracy of up to 86.9% on handprinted alphabet characters at a power consumption of 33 mW. Beyond that, on a handwritten digit database and a radio-frequency fingerprinting dataset, software-based numerical evaluations show 1.6× to 9.8× and 1.9× to 4.4× speedups, respectively, without significantly degrading classification accuracy compared with cutting-edge DL approaches.
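The recurrent delay-dynamical component described above can be illustrated with a toy time-delay reservoir: a single nonlinear node with delayed feedback, time-multiplexed into virtual nodes, followed by a linear readout trained in closed form (no backpropagation). This is a minimal sketch under assumed parameters (`n_virtual`, `eta`, `gamma` are illustrative), not the paper's circuit-level implementation:

```python
import numpy as np

def delay_reservoir(u, n_virtual=50, eta=0.5, gamma=0.8, seed=0):
    """Toy time-delay reservoir: one nonlinear node with delayed feedback,
    time-multiplexed into `n_virtual` virtual nodes along the delay line.
    All names and parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_virtual)  # random input mask
    state = np.zeros(n_virtual)                     # virtual nodes on the delay line
    states = []
    for sample in u:                                # u: (T,) scalar input stream
        prev = state.copy()                         # state one delay period ago
        for k in range(n_virtual):
            # each virtual node mixes the masked input with the delayed state
            # of its neighbor (index -1 wraps, closing the feedback loop)
            state[k] = np.tanh(eta * mask[k] * sample + gamma * prev[k - 1])
        states.append(state.copy())
    return np.asarray(states)                       # (T, n_virtual) feature matrix

def train_readout(X, Y, reg=1e-3):
    """Closed-form ridge-regression readout -- no gradient-based training,
    echoing the abstract's point about avoiding complex learning loops."""
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
```

Because the reservoir lifts the input into a high-dimensional temporal feature space, a simple linear readout can separate classes that are not linearly separable in the raw input, which is the intuition behind the "linearly separable" claim above.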

Updated: 2021-06-08