Exponential Discretization of Weights of Neural Network Connections in Pre-Trained Neural Network. Part II: Correlation Maximization
Optical Memory and Neural Networks, Pub Date: 2020-10-08, DOI: 10.3103/s1060992x20030042
M. M. Pushkareva , I. M. Karandashev

Abstract

In this article, we develop a method of linear and exponential quantization of neural network weights. We improve it by maximizing the correlation between the initial and quantized weights, taking the weight density distribution of each layer into account. The quantization is performed after neural network training, without subsequent retraining, and we compare our algorithm with plain linear and exponential quantization. With 3-bit exponential quantization, the VGG-16 network already achieves satisfactory quality (top-5 accuracy of 76%); at 4 bits, ResNet50 and Xception reach top-5 accuracies of 79% and 61%, respectively.
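
The abstract only outlines the approach, so the following is a minimal, hypothetical sketch of per-layer exponential quantization with the base chosen by a grid search to maximize the correlation between the original and quantized weights. The level grid, search range, and names (exp_quantize, quantize_max_corr) are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def exp_quantize(w, bits, base):
        """Snap each weight to the nearest signed exponential level
        max|w| * base**(-k), k = 0 .. 2**(bits-1) - 1 (one bit for the sign)."""
        n_levels = 2 ** (bits - 1)
        levels = np.max(np.abs(w)) * base ** (-np.arange(n_levels))
        idx = np.argmin(np.abs(np.abs(w)[..., None] - levels), axis=-1)
        return np.sign(w) * levels[idx]

    def correlation(a, b):
        """Pearson correlation between two weight tensors."""
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    def quantize_max_corr(w, bits, bases=np.linspace(1.2, 4.0, 60)):
        """Grid-search the exponential base that maximizes the correlation
        between the original and quantized weights of one layer."""
        best_base = max(bases, key=lambda b: correlation(w, exp_quantize(w, bits, b)))
        return exp_quantize(w, bits, best_base), best_base

    # Usage on a synthetic layer: 3-bit quantization, as in the VGG-16 result.
    w = np.random.randn(512, 512) * 0.05
    w_q, base = quantize_max_corr(w, bits=3)
    print(f"base = {base:.2f}, corr = {correlation(w, w_q):.4f}")

This matches the abstract's note that the weight density distribution of each layer is taken into account: the base maximizing the correlation adapts the spacing of the levels to where the layer's weights concentrate.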


