An Improved Logarithmic Multiplier for Energy-Efficient Neural Computing
IEEE Transactions on Computers (IF 3.7) Pub Date: 2020-05-06, DOI: 10.1109/tc.2020.2992113
Mohammad Saeed Ansari, Bruce F. Cockburn, Jie Han

Multiplication is the most resource-hungry operation in neural networks (NNs). Logarithmic multipliers (LMs) simplify multiplication to shift and addition operations and thus reduce energy consumption. Since implementing the logarithm in a compact circuit often introduces approximation, some accuracy loss is inevitable in LMs. However, this inaccuracy accords with the inherent error tolerance of NNs and their associated applications. This article proposes an improved logarithmic multiplier (ILM) that, unlike existing designs, rounds both inputs to their nearest powers of two by using a proposed nearest-one detector (NOD) circuit. Because the output of the NOD uses a one-hot representation, some entries in the truth table of a conventional adder cannot occur; hence, a compact adder is designed for the reduced truth table. The 8×8 ILM achieves up to 17.48 percent savings in power consumption compared to a recent LM in the literature while being almost 8 percent more accurate. Moreover, evaluating the ILM on two benchmark NN workloads shows up to 21.85 percent reduction in energy consumption compared to NNs implemented with other LMs. Interestingly, using the ILM increases the classification accuracy of the considered NNs by up to 1.4 percent compared to an NN implementation that uses exact multipliers.
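The abstract does not spell out the full datapath, but the core ideas, rounding each input to its nearest power of two (the NOD's job) and replacing the multiplication with shifts and additions, can be sketched in software. The sketch below is an illustrative model under the assumption that the cross-residual term of the product is dropped; the function names are hypothetical, not from the paper.

```python
def nearest_pow2(x: int) -> int:
    """Software model of a nearest-one detector (NOD): round a
    positive integer to its nearest power of two (ties round up)."""
    hi = 1 << (x.bit_length() - 1)        # largest power of two <= x
    return hi << 1 if (hi << 1) - x <= x - hi else hi

def ilm_approx_mul(a: int, b: int) -> int:
    """Shift-and-add approximation of a*b in the spirit of an LM:
    with a ~ 2^k1 and b ~ 2^k2 from the NOD,
    a*b ~ (b << k1) + (a << k2) - 2^(k1+k2),
    i.e. the product of the two residuals is discarded."""
    k1 = nearest_pow2(a).bit_length() - 1
    k2 = nearest_pow2(b).bit_length() - 1
    return (b << k1) + (a << k2) - (1 << (k1 + k2))
```

When both inputs are exact powers of two the result is exact (e.g. `ilm_approx_mul(4, 8)` gives 32); otherwise the error comes only from the dropped residual product, which is what makes the scheme a good fit for error-tolerant NN inference.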

Updated: 2020-05-06