DeepShift: Towards Multiplication-Less Neural Networks
arXiv - CS - Neural and Evolutionary Computing Pub Date : 2019-05-30 , DOI: arxiv-1905.13298 Mostafa Elhoushi, Zihao Chen, Farhan Shafiq, Ye Henry Tian, Joey Yiwei Li
Deploying convolutional neural networks (CNNs) in mobile environments is
bottlenecked by their high computation and power budgets. Convolution layers
and fully connected layers, because of their intense use of multiplications,
are the dominant contributors to this computation budget. This paper tackles
the problem by introducing two new operations, convolutional shifts and
fully-connected shifts, which replace multiplications altogether with bitwise
shifts and sign flips. For inference, both approaches may require only 6 bits
to represent the weights. Neural network architectures that use convolutional
shifts and fully-connected shifts are referred to as DeepShift models. We
propose two methods to train DeepShift models: DeepShift-Q, which trains
regular weights constrained to powers of 2, and DeepShift-PS, which trains the
values of the shifts and sign flips directly. Training the DeepShift version of
the ResNet18 architecture from scratch, we obtained an accuracy of 92.33% on
the CIFAR10 dataset and Top-1/Top-5 accuracies of 65.63%/86.33% on the ImageNet
dataset. Training the DeepShift version of VGG16 on ImageNet from scratch
resulted in a drop of less than 0.3% in Top-5 accuracy. Converting the
pre-trained 32-bit floating-point baseline model of GoogleNet to DeepShift and
training it for 3 epochs resulted in Top-1/Top-5 accuracies of 69.87%/89.62%,
which are actually higher than those of the original model. Further testing was
performed on various well-known CNN architectures. Finally, we implemented the
convolutional-shift and fully-connected-shift GPU kernels and showed a 25%
reduction in inference latency on ResNet18 compared to unoptimized
multiplication-based GPU kernels. The code is available online at
https://github.com/mostafaelhoushi/DeepShift.
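The core idea in the abstract is that each weight can be expressed as a signed power of two, w = s * 2^p, so multiplying an activation by w reduces to a bitwise shift plus an optional sign flip. A minimal sketch of that idea on integers is shown below; the function names and the rounding-to-nearest-power-of-two rule are illustrative assumptions, not the paper's exact training procedure (DeepShift-Q/PS learn these values during training):

```python
import math

def shift_mul(x: int, p: int, s: int) -> int:
    """Compute x * (s * 2**p) without multiplication (sketch).

    p is the learned shift amount (may be negative); s in {-1, 0, +1}
    is the learned sign flip. A negative p becomes a right shift.
    """
    shifted = x << p if p >= 0 else x >> -p  # bitwise shift replaces * 2**p
    if s > 0:
        return shifted
    if s < 0:
        return -shifted  # sign flip replaces multiplying by -1
    return 0

def quantize_weight(w: float):
    """Round a real-valued weight to the nearest signed power of two (sketch)."""
    if w == 0:
        return 0, 0
    s = 1 if w > 0 else -1
    p = round(math.log2(abs(w)))  # nearest power-of-two exponent
    return p, s
```

For example, `shift_mul(3, 2, 1)` computes 3 * 4 = 12 with a left shift, and `quantize_weight(0.25)` returns `(-2, 1)`, i.e. a right shift by 2. With shifts bounded to a small integer range, p and s together fit in a few bits, which is consistent with the 6-bit weight representation mentioned above.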
Updated: 2020-01-17