Deep Quantization Generative Networks
Pattern Recognition (IF 7.5), Pub Date: 2020-09-01, DOI: 10.1016/j.patcog.2020.107338
Diwen Wan, Fumin Shen, Li Liu, Fan Zhu, Lei Huang, Mengyang Yu, Heng Tao Shen, Ling Shao

Abstract: Equipped with powerful convolutional neural networks (CNNs), generative models have achieved tremendous success in various vision applications. However, deep generative networks suffer from high computational and memory costs in both model training and deployment. While many efforts have been devoted to accelerating discriminative models through quantization, effectively reducing the costs of deep generative models is more challenging and remains unexplored. In this work, we investigate applying quantization techniques to deep generative models. We find that preserving as much information as possible in the quantized activations is key to obtaining high-quality generative models. With this in mind, we propose Deep Quantization Generative Networks (DQGNs) to effectively accelerate and compress deep generative networks. By expanding the dimensions of the quantization basis space, DQGNs achieve lower quantization error and adapt well to complex data distributions. Experiments on two powerful frameworks (variational auto-encoders and generative adversarial networks) and two practical applications (style transfer and super-resolution) demonstrate our findings and the effectiveness of the proposed approach.
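The core idea, representing each quantized activation as a combination of elements drawn from a quantization basis so that expanding the basis dimension K enlarges the codebook and reduces quantization error, can be sketched in a few lines. The NumPy snippet below is a minimal illustration only, not the paper's actual DQGN formulation: the function name quantize_with_basis, the {0,1} code convention, and the fixed stand-in basis are assumptions made for this example; in the paper the basis would be learned jointly with the network.

```python
import numpy as np

def quantize_with_basis(x, basis):
    """Map each activation in x to the nearest value expressible as a
    {0,1}-weighted combination of the K basis elements.

    A K-element basis spans up to 2^K codebook levels, so enlarging K
    expands the quantization basis space and lowers the error.
    """
    K = len(basis)
    # Enumerate all 2^K binary codes and the levels they reconstruct.
    codes = np.array([[(i >> k) & 1 for k in range(K)] for i in range(2 ** K)])
    levels = codes @ np.asarray(basis)            # shape: (2^K,)
    # Assign every activation to its nearest codebook level.
    idx = np.abs(x[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# Toy comparison: quantization error shrinks as the basis dimension grows.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).clip(min=0)       # ReLU-like activations
for K in (1, 2, 3):
    basis = np.linspace(0.5, 2.0, K)              # stand-in for a learned basis
    mse = np.mean((x - quantize_with_basis(x, basis)) ** 2)
    print(f"K={K}: quantization MSE = {mse:.4f}")
```

Because a K-element basis spans up to 2^K distinct levels, each added basis dimension can double the size of the codebook, which is consistent with the abstract's claim that expanding the basis space lowers quantization error on complex activation distributions.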

Updated: 2020-09-01