Semi-supervised learning using adversarial training with good and bad samples
Machine Vision and Applications (IF 3.3) Pub Date: 2020-07-19, DOI: 10.1007/s00138-020-01096-z
Wenyuan Li , Zichen Wang , Yuguang Yue , Jiayun Li , William Speier , Mingyuan Zhou , Corey Arnold

In this work, we investigate semi-supervised learning (SSL) for image classification using adversarial training. Previous results have illustrated that generative adversarial networks (GANs) can be used for multiple purposes in SSL. Triple-GAN, which aims to jointly optimize model components by incorporating three players, generates suitable image-label pairs to compensate for the lack of labeled data in SSL and improves benchmark performance. Conversely, Bad (or complementary) GAN optimizes generation to produce complementary data-label pairs and force a classifier's decision boundary to lie between data manifolds. Although it generally outperforms Triple-GAN, Bad GAN is highly sensitive to the amount of labeled data used for training. Unifying these two approaches, we present unified-GAN (UGAN), a novel framework that enables a classifier to simultaneously learn from both good and bad samples through adversarial training. We perform extensive experiments on various datasets and demonstrate that UGAN: (1) achieves performance competitive with other GAN-based models, and (2) is robust to variations in the amount of labeled data used for training.
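The abstract gives no implementation details, so the following is purely an illustrative sketch (not the authors' code) of how a semi-supervised classifier objective might combine "good" samples (realistic image-label pairs, in the spirit of Triple-GAN) with "bad" samples (off-manifold samples, in the spirit of Bad GAN). The function name ugan_style_classifier_loss, the uniform-distribution surrogate for the bad-sample term, the entropy regularizer on unlabeled data, and the weights w_good / w_bad are all hypothetical choices, not taken from the paper.

```python
# Illustrative sketch only: one plausible way to combine supervised,
# good-sample, bad-sample, and unlabeled terms in a single classifier loss.
import torch
import torch.nn.functional as F


def ugan_style_classifier_loss(classifier, x_lab, y_lab, x_unlab,
                               x_good, y_good, x_bad,
                               w_good=1.0, w_bad=1.0):
    """Hypothetical combined objective for a semi-supervised classifier."""
    # Supervised cross-entropy on the small labeled set.
    loss_sup = F.cross_entropy(classifier(x_lab), y_lab)

    # "Good" samples: treat realistic generated image-label pairs as
    # additional labeled data (Triple-GAN style).
    loss_good = F.cross_entropy(classifier(x_good), y_good)

    # "Bad" samples: push the classifier toward maximal uncertainty on
    # off-manifold samples (one common surrogate for the Bad GAN term).
    logits_bad = classifier(x_bad)
    uniform = torch.full_like(logits_bad, 1.0 / logits_bad.size(1))
    loss_bad = F.kl_div(F.log_softmax(logits_bad, dim=1), uniform,
                        reduction="batchmean")

    # Unlabeled data: entropy minimization, a standard SSL regularizer
    # used here purely for illustration.
    probs_unlab = F.softmax(classifier(x_unlab), dim=1)
    loss_unlab = -(probs_unlab * probs_unlab.clamp_min(1e-8).log()).sum(1).mean()

    return loss_sup + w_good * loss_good + w_bad * loss_bad + loss_unlab


if __name__ == "__main__":
    # Tiny smoke test with random tensors and a linear classifier.
    torch.manual_seed(0)
    clf = torch.nn.Sequential(torch.nn.Flatten(),
                              torch.nn.Linear(3 * 32 * 32, 10))
    x_lab = torch.randn(8, 3, 32, 32)
    y_lab = torch.randint(0, 10, (8,))
    x_unlab = torch.randn(16, 3, 32, 32)
    x_good = torch.randn(8, 3, 32, 32)
    y_good = torch.randint(0, 10, (8,))
    x_bad = torch.randn(8, 3, 32, 32)
    print(ugan_style_classifier_loss(clf, x_lab, y_lab, x_unlab,
                                     x_good, y_good, x_bad))
```

In this kind of setup, the good-sample term effectively enlarges the labeled set, while the bad-sample term discourages confident predictions off the data manifold, which is the intuition behind placing the decision boundary between manifolds.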
