A low-cost compensated approximate multiplier for Bfloat16 data processing on convolutional neural network inference
ETRI Journal (IF 1.4) Pub Date: 2021-07-26, DOI: 10.4218/etrij.2020-0370
HyunJin Kim

This paper presents a low-cost two-stage approximate multiplier for bfloat16 (brain floating-point) data processing. For cost-efficient approximate multiplication, the first stage implements Mitchell's algorithm, which performs the approximate multiplication using only two adders. The second stage adopts an exact multiplication to compensate for the error of the first stage: it multiplies the error terms and adds the truncated result to the final output. In our design, the low-cost multiplications in both stages reduce hardware costs significantly while keeping relative errors low, because the second stage compensates for the error of the first. We apply our approximate multiplier to convolutional neural network (CNN) inference and observe only small accuracy drops with well-known pre-trained models on the ImageNet database. Our design therefore enables low-cost CNN inference systems with high test accuracy.
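The two-stage scheme described in the abstract can be illustrated with a small numerical sketch in Python. A bfloat16 significand is 1 + f with a 7-bit fraction f in [0, 1), and Mitchell's approximation replaces the significand product (1 + f_a)(1 + f_b) with 1 + f_a + f_b (or 2(f_a + f_b) when the fraction sum carries), so its error is exactly f_a * f_b in the no-carry case and (1 - f_a)(1 - f_b) in the carry case. The sketch below computes that error term exactly and adds a truncated version back, mirroring the compensation stage; the function names, the 4-bit truncation width, and the float-based modeling are illustrative assumptions rather than the paper's hardware design, and sign/exponent handling is omitted.

import random

def mitchell_approx(f_a: float, f_b: float) -> float:
    """Stage 1: Mitchell's log-domain approximation of the significand
    product (1 + f_a) * (1 + f_b). In hardware this needs only adders
    (one for the fractions, one for the exponents, not modeled here)."""
    s = f_a + f_b
    if s < 1.0:
        return 1.0 + s   # no carry: significand stays in [1, 2)
    return 2.0 * s       # carry: exponent bumps, significand becomes s in [1, 2)

def compensated_multiply(f_a: float, f_b: float, trunc_bits: int = 4) -> float:
    """Stage 2: compensate Mitchell's error with a small exact multiply
    of the error terms, truncated to trunc_bits fractional bits before
    being added back (trunc_bits = 4 is an assumed width, not the paper's)."""
    s = f_a + f_b
    approx = mitchell_approx(f_a, f_b)
    # Mitchell's error is known in closed form for both cases.
    err = f_a * f_b if s < 1.0 else (1.0 - f_a) * (1.0 - f_b)
    scale = 1 << trunc_bits
    err_trunc = int(err * scale) / scale   # truncate, as a narrow adder would
    return approx + err_trunc

random.seed(0)
for _ in range(3):
    f_a = random.randrange(128) / 128.0    # 7-bit bfloat16-style fractions
    f_b = random.randrange(128) / 128.0
    exact = (1.0 + f_a) * (1.0 + f_b)
    approx = compensated_multiply(f_a, f_b)
    print(f"exact={exact:.6f}  approx={approx:.6f}  "
          f"rel_err={(approx - exact) / exact:+.4%}")

With an untruncated compensation term the result would match the true significand product exactly; truncating it trades a small bias for a narrower adder, which is consistent with the low relative errors at low hardware cost that the abstract reports.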

Updated: 2021-09-24