Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
arXiv - CS - Machine Learning Pub Date : 2020-01-16 , DOI: arxiv-2001.05674
Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, Mehran Nekuii, Oguz H Elibol, Hanlin Tang

Training with a larger number of parameters while keeping iterations fast is an increasingly adopted strategy for developing better-performing Deep Neural Network (DNN) models. This increases the memory footprint and computational requirements of training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models: ResNet-50, Transformer and NCF. The method maintains model accuracy without requiring fine-tuning of loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors, shift and squeeze factors, which are used to optimally adjust the range of the tensors in 8 bits, thus minimizing the information loss due to quantization.
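The abstract describes the shift/squeeze idea only at a high level. The sketch below is a minimal NumPy illustration, under stated assumptions, of how two tensor-level statistics could remap a tensor's log2-magnitude range into the narrow exponent budget of an 8-bit float before quantization. The FP8 layout (1 sign, 5 exponent, 2 mantissa bits), the way the factors are derived, and all function names are hypothetical choices for illustration, not the authors' published algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): map a tensor's
# log2-magnitude spread onto an assumed FP8 exponent range via a tensor-level
# shift and squeeze, simulate FP8 rounding, then invert the transform.

FP8_EXP_MIN, FP8_EXP_MAX = -15.0, 15.0   # assumed representable exponent range
FP8_MANTISSA_BITS = 2                    # assumed mantissa width

def shift_squeeze_quantize(x, eps=1e-12):
    """Quantize x to a simulated FP8 grid after a shift/squeeze transform."""
    sign = np.sign(x)
    e = np.log2(np.abs(x) + eps)          # per-element log2 magnitude

    # Squeeze: scale the tensor's exponent spread to fit the FP8 exponent range.
    # Shift: center that spread inside the range. Both are tensor-level scalars.
    squeeze = (FP8_EXP_MAX - FP8_EXP_MIN) / max(e.max() - e.min(), eps)
    squeeze = min(squeeze, 1.0)           # never widen an already-narrow tensor
    shift = 0.5 * (FP8_EXP_MAX + FP8_EXP_MIN) - squeeze * 0.5 * (e.max() + e.min())

    e_adj = squeeze * e + shift           # exponents now lie in the FP8 range

    # Simulate FP8 rounding: keep FP8_MANTISSA_BITS bits of mantissa.
    exp_floor = np.floor(e_adj)
    mantissa = 2.0 ** (e_adj - exp_floor)                  # in [1, 2)
    step = 2.0 ** -FP8_MANTISSA_BITS
    mantissa_q = np.round((mantissa - 1.0) / step) * step + 1.0
    x_fp8 = sign * mantissa_q * 2.0 ** exp_floor           # stored low-precision value
    return x_fp8, shift, squeeze

def dequantize(x_fp8, shift, squeeze, eps=1e-12):
    """Undo the shift/squeeze transform to recover values on the original scale."""
    sign = np.sign(x_fp8)
    e_adj = np.log2(np.abs(x_fp8) + eps)
    return sign * 2.0 ** ((e_adj - shift) / squeeze)

if __name__ == "__main__":
    x = np.random.randn(1024) * 1e-4                       # small-magnitude tensor
    x_q, shift, squeeze = shift_squeeze_quantize(x)
    x_hat = dequantize(x_q, shift, squeeze)
    rel_err = np.abs(x_hat - x) / (np.abs(x) + 1e-12)
    print(f"shift={shift:.2f} squeeze={squeeze:.3f} median rel err={np.median(rel_err):.3e}")
```

Without the shift, a tensor whose magnitudes cluster around 1e-4 would underflow a plain FP8 format; after recentering, the same values land in the representable range and only mantissa rounding error remains.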

Updated: 2020-01-17