Hybrid Convolution Architecture for Energy-Efficient Deep Neural Network Processing
IEEE Transactions on Circuits and Systems I: Regular Papers (IF 5.2) Pub Date: 2021-02-25, DOI: 10.1109/tcsi.2021.3059882
Suchang Kim, Jihyuck Jo, In-Cheol Park

This paper presents a convolution process and its hardware architecture for energy-efficient deep neural network (DNN) processing. A DNN generally consists of a number of convolutional layers, and in a shallow layer the number of input features involved in the convolution is larger than the number of kernels. As the layer deepens, however, the number of input features decreases while the number of kernels increases. Previous convolution architectures developed to enhance energy efficiency have tried to reduce memory accesses by increasing the reuse of data once fetched from memory. However, redundant memory accesses are still required because the change in these data amounts across layers is not taken into account. We propose a hybrid convolution process that selects either a kernel-stay or a feature-stay process according to the amounts of data, together with a forwarding technique that further reduces the memory accesses needed to store and load partial sums. The proposed convolution process is effective in maximizing data reuse, leading to an energy-efficient hybrid convolution architecture. Compared with state-of-the-art architectures, the proposed architecture improves energy efficiency by up to 2.38 times in a 65-nm CMOS process.
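The layer-wise selection described in the abstract can be illustrated with a minimal sketch. The Python code below is a hypothetical illustration, not the paper's hardware or its exact cost model: it simply compares how much feature data and kernel data a layer touches and keeps the larger set stationary, which matches the intuition that shallow layers favor a feature-stay schedule and deep layers a kernel-stay schedule. The layer names and shapes are assumed (VGG-like) for illustration only.

# A minimal sketch, assuming a simple data-volume comparison decides
# between the kernel-stay and feature-stay processes. Not the authors'
# implementation; shapes are hypothetical.
from dataclasses import dataclass

@dataclass
class ConvLayer:
    name: str
    in_h: int    # input feature-map height
    in_w: int    # input feature-map width
    in_ch: int   # input channels
    out_ch: int  # number of kernels (output channels)
    k: int       # kernel height/width

    @property
    def feature_count(self) -> int:
        # total input-feature elements involved in this layer's convolution
        return self.in_h * self.in_w * self.in_ch

    @property
    def kernel_count(self) -> int:
        # total kernel weights involved in this layer's convolution
        return self.k * self.k * self.in_ch * self.out_ch

def choose_process(layer: ConvLayer) -> str:
    """Keep the larger data set stationary on-chip so that the smaller
    set is the one streamed (and possibly re-fetched) from memory."""
    if layer.feature_count >= layer.kernel_count:
        return "feature-stay"  # typical for shallow layers
    return "kernel-stay"       # typical for deep layers

if __name__ == "__main__":
    shallow = ConvLayer("conv1_1", in_h=224, in_w=224, in_ch=3,   out_ch=64,  k=3)
    deep    = ConvLayer("conv5_1", in_h=14,  in_w=14,  in_ch=512, out_ch=512, k=3)
    for layer in (shallow, deep):
        print(f"{layer.name}: features={layer.feature_count}, "
              f"kernels={layer.kernel_count} -> {choose_process(layer)}")

Running the sketch reports feature-stay for the shallow layer (about 150k feature elements versus about 1.7k weights) and kernel-stay for the deep layer (about 100k feature elements versus about 2.4M weights), mirroring the trend the abstract describes.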

Updated: 2021-04-20