Applicability of Minifloats for Efficient Calculations in Neural Networks
Optoelectronics, Instrumentation and Data Processing (IF 0.5). Pub Date: 2020-01-01. DOI: 10.3103/s8756699020010100
A. Yu. Kondrat'ev, A. I. Goncharenko

The feasibility of running neural-network inference on minifloats has been studied. Intermediate computations were performed using a float16 accumulator. Performance was tested on the GoogleNet, ResNet-50, and MobileNet-v2 convolutional neural networks and on the DeepSpeechv01 recurrent network. The experiments showed that, with 11-bit minifloats and without additional training, these networks perform no worse than the same networks using the standard float32 type. The results indicate that minifloats can be used to design efficient computing hardware for neural-network inference.
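The effect of a low-bit float format can be simulated in software by rounding float32 values to the nearest value representable in the smaller format. The sketch below quantizes to a hypothetical 11-bit minifloat with a 1/5/5 sign/exponent/mantissa split; the abstract does not specify the actual bit layout, so that split (and the function name) is an assumption for illustration.

```python
import numpy as np

def quantize_minifloat(x, exp_bits=5, man_bits=5):
    """Round float32 values to the nearest value of a hypothetical
    (1 sign, exp_bits exponent, man_bits mantissa) minifloat.
    The bit layout is an assumption; the paper's format may differ."""
    x = np.asarray(x, dtype=np.float32)
    bias = 2 ** (exp_bits - 1) - 1
    sign = np.sign(x)
    mag = np.abs(x)
    # Exponent of each magnitude (floor of log2); guard against log2(0).
    e = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    e = np.clip(e, -bias + 1, bias)       # clamp to representable exponents
    step = 2.0 ** (e - man_bits)          # size of one mantissa increment
    q = np.round(mag / step) * step       # round mantissa to man_bits bits
    # Largest finite value of the format, used to saturate on overflow.
    max_val = (2.0 - 2.0 ** -man_bits) * 2.0 ** bias
    return sign * np.minimum(q, max_val)
```

For example, `quantize_minifloat(3.14159265)` rounds pi to 3.125, the nearest value with a 5-bit mantissa at exponent 1; the maximum relative rounding error of such a format is roughly 2^-6, which is consistent with the paper's finding that inference accuracy can survive this precision.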
