P-KDGAN: Progressive Knowledge Distillation with GANs for One-class Novelty Detection
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-07-14, DOI: arxiv-2007.06963
Zhiwei Zhang, Shifeng Chen and Lei Sun

One-class novelty detection aims to identify anomalous instances that do not conform to the expected normal instances. In this paper, Generative Adversarial Networks (GANs) based on an encoder-decoder-encoder pipeline are used for detection and achieve state-of-the-art performance. However, deep neural networks are too over-parameterized to deploy on resource-limited devices. Therefore, Progressive Knowledge Distillation with GANs (P-KDGAN) is proposed to learn compact and fast novelty detection networks. P-KDGAN is a novel attempt to connect two standard GANs through a designed distillation loss that transfers knowledge from the teacher to the student. The progressive learning of knowledge distillation is a two-step approach that continuously improves the performance of the student GAN and outperforms single-step methods. In the first step, the student GAN learns basic knowledge entirely from the teacher, guided by the pretrained teacher GAN whose weights are fixed. In the second step, the knowledgeable teacher and student GANs are jointly fine-tuned to further improve performance and stability. Experimental results on CIFAR-10, MNIST, and FMNIST show that our method improves the performance of the student GAN by 2.44%, 1.77%, and 1.73% when compressing computation at ratios of 24.45:1, 311.11:1, and 700:1, respectively.
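A minimal PyTorch sketch of the two-step procedure may clarify the training flow. Only the encoder-decoder-encoder pipeline and the two steps (frozen-teacher distillation, then joint fine-tuning) come from the abstract; the `EDEGenerator` architecture, the layer widths, and the MSE-based `distill_loss` are illustrative assumptions, and the adversarial (discriminator) terms of the two GANs are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EDEGenerator(nn.Module):
    """Encoder-decoder-encoder generator; `hidden` controls capacity.
    The shared `latent` size keeps teacher/student outputs comparable."""
    def __init__(self, hidden, latent=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Flatten(), nn.Linear(784, hidden),
                                  nn.ReLU(), nn.Linear(hidden, latent))
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 784), nn.Sigmoid())
        self.enc2 = nn.Sequential(nn.Linear(784, hidden), nn.ReLU(),
                                  nn.Linear(hidden, latent))

    def forward(self, x):
        z1 = self.enc1(x)      # latent code of the input
        recon = self.dec(z1)   # reconstruction
        z2 = self.enc2(recon)  # latent code of the reconstruction
        return recon, z1, z2

def distill_loss(student_out, teacher_out):
    # Assumed distillation loss: MSE between corresponding student and
    # teacher outputs (reconstruction plus both latent codes).
    return sum(F.mse_loss(s, t) for s, t in zip(student_out, teacher_out))

teacher = EDEGenerator(hidden=256)  # would be pretrained on normal data
student = EDEGenerator(hidden=32)   # compact network for deployment
x = torch.rand(8, 1, 28, 28)        # stand-in batch of normal images

# Step 1: teacher weights fixed; the student learns from its guidance.
for p in teacher.parameters():
    p.requires_grad_(False)
opt_student = torch.optim.Adam(student.parameters(), lr=2e-4)
loss = distill_loss(student(x), teacher(x))
opt_student.zero_grad(); loss.backward(); opt_student.step()

# Step 2: joint fine-tuning of teacher and student for performance
# and stability.
for p in teacher.parameters():
    p.requires_grad_(True)
opt_joint = torch.optim.Adam(
    list(teacher.parameters()) + list(student.parameters()), lr=1e-5)
loss = distill_loss(student(x), teacher(x))
opt_joint.zero_grad(); loss.backward(); opt_joint.step()
```

At test time, encoder-decoder-encoder detectors typically score an instance by a discrepancy such as the distance between the two latent codes, flagging large values as anomalous; the exact score used by P-KDGAN is not specified in this abstract.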

Updated: 2020-07-15