Generative Zero-shot Network Quantization
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2021-01-21, DOI: arxiv-2101.08430
Xiangyu He, Qinghao Hu, Peisong Wang, Jian Cheng

Convolutional neural networks are able to learn realistic image priors from numerous training samples in low-level image generation and restoration. We show that, for high-level image recognition tasks, we can further reconstruct "realistic" images of each category by leveraging intrinsic Batch Normalization (BN) statistics, without any training data. Inspired by popular VAE/GAN methods, we regard the zero-shot optimization of synthetic images as generative modeling that matches the distribution of BN statistics. The generated images then serve as a calibration set for the subsequent zero-shot network quantization. Our method meets the need to quantize models trained on sensitive data, e.g., when no data is available due to privacy concerns. Extensive experiments on benchmark datasets show that, with the help of generated data, our approach consistently outperforms existing data-free quantization methods.
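The BN-statistics matching described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example (not the authors' released code): starting from random noise, it optimizes a batch of synthetic images so that the batch statistics seen by each BatchNorm layer match the running mean and variance stored in a pretrained network, while a cross-entropy term pushes each image toward an assigned class label. The model choice, hyperparameters, and function names are assumptions made for illustration.

# Minimal sketch of BN-statistics-driven image synthesis (assumed details, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

def synthesize_calibration_batch(model, batch_size=32, num_classes=1000,
                                 steps=500, lr=0.1, bn_weight=1.0, device="cpu"):
    model = model.to(device).eval()

    # Collect a BN-matching loss for every BatchNorm2d layer via forward hooks.
    bn_losses = []
    hooks = []
    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            # Match the batch statistics of the synthetic images to the
            # running statistics stored in the pretrained BN layer.
            bn_losses.append(F.mse_loss(mean, bn.running_mean)
                             + F.mse_loss(var, bn.running_var))
        return hook
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))

    # Start from random noise and assign one (arbitrary) target label per image.
    images = torch.randn(batch_size, 3, 224, 224, device=device, requires_grad=True)
    targets = torch.randint(0, num_classes, (batch_size,), device=device)
    optimizer = torch.optim.Adam([images], lr=lr)

    for _ in range(steps):
        bn_losses.clear()
        optimizer.zero_grad()
        logits = model(images)
        loss = F.cross_entropy(logits, targets) + bn_weight * torch.stack(bn_losses).sum()
        loss.backward()
        optimizer.step()

    for h in hooks:
        h.remove()
    return images.detach()

if __name__ == "__main__":
    net = torchvision.models.resnet18(pretrained=True)  # any BN-based pretrained CNN would do
    calib = synthesize_calibration_batch(net, batch_size=8, steps=100)
    print(calib.shape)  # torch.Size([8, 3, 224, 224])

In this sketch the returned batch plays the role of the calibration set fed to a subsequent post-training quantization step.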

Updated: 2021-01-22