ΔNN: Power-efficient Neural Network Acceleration using Differential Weights
IEEE Micro (IF 3.6), Pub Date: 2019-01-01, DOI: 10.1109/mm.2019.2948345
Hoda Mahdiani, Alireza Khadem, Azam Ghanbari, Mehdi Modarressi, Farima Fattahi, Masoud Daneshtalab

The enormous and ever-increasing complexity of state-of-the-art neural networks has impeded the deployment of deep learning on resource-limited embedded and mobile devices. To reduce the complexity of neural networks, this article presents ΔNN, a power-efficient architecture that leverages a combination of the approximate value locality of neuron weights and the algorithmic structure of neural networks. ΔNN keeps each weight as its difference (Δ) to the nearest smaller weight: each weight reuses the calculations of the smaller weight, followed by a calculation on the Δ value to make up the difference. We also round the Δ up/down to the closest power of two to further reduce complexity. The experimental results show that ΔNN boosts the average performance by 14%–37% and reduces the average power consumption by 17%–49% over some state-of-the-art neural network designs.
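As a rough illustration of the differential-weight idea (a software sketch only, not the ΔNN hardware datapath described in the paper), the snippet below encodes a sorted weight vector as power-of-two deltas and evaluates a dot product by reusing a running suffix sum of the inputs, so each weight adds only one shift-like delta term on top of the next-smaller weight's contribution. The function names and the NumPy formulation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def encode_differential(weights):
    """Sort the weights and store each one as a delta to the nearest smaller
    weight, with every delta rounded up/down to the closest power of two
    (the smallest weight itself is kept exact in this sketch)."""
    order = np.argsort(weights)
    sorted_w = weights[order]
    deltas = np.empty(len(sorted_w))
    deltas[0] = sorted_w[0]
    for i in range(1, len(sorted_w)):
        d = sorted_w[i] - sorted_w[i - 1]          # non-negative after sorting
        deltas[i] = 0.0 if d == 0 else 2.0 ** round(np.log2(d))
    return order, deltas

def dot_product_differential(x, weights):
    """Approximate sum_i w_i * x_i: because w_i = delta_1 + ... + delta_i,
    the sum equals sum_j delta_j * (x_j + x_{j+1} + ...), i.e. each delta
    multiplies a suffix sum of the inputs. With power-of-two deltas, each of
    these multiplications reduces to a shift in hardware."""
    order, deltas = encode_differential(weights)
    x_sorted = x[order]
    suffix = x_sorted.sum()      # inputs whose weights are >= the current one
    total = 0.0
    for i in range(len(deltas)):
        total += deltas[i] * suffix
        suffix -= x_sorted[i]
    return total

# Tiny check against an exact dot product (results differ because the deltas
# are rounded to powers of two, which is the approximation the paper exploits).
w = np.array([0.05, 0.30, 0.12, 0.30])
x = np.array([1.0, 2.0, 3.0, 4.0])
print(dot_product_differential(x, w), np.dot(w, x))
```

In the paper, this reuse happens in the accelerator's datapath across a layer's weights; the sketch only conveys why differential, power-of-two-rounded weights replace most full multiplications with cheaper shift-and-add work.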
