Adversarial feature distribution alignment for semi-supervised learning
Computer Vision and Image Understanding (IF 4.3), Pub Date: 2020-09-22, DOI: 10.1016/j.cviu.2020.103109
Christoph Mayer, Matthieu Paul, Radu Timofte

Training deep neural networks with only a few labeled samples can lead to overfitting. This is problematic in semi-supervised learning (SSL), where typically only a few labeled samples are available. In this paper, we show that a consequence of overfitting in SSL is feature distribution misalignment between labeled and unlabeled samples. Hence, we propose a new feature distribution alignment method that is particularly effective when only a small number of labeled samples is available. We evaluate our method on CIFAR-10, SVHN and LSUN. On SVHN we achieve test errors of 3.88% (250 labeled samples) and 3.39% (1000 labeled samples), close to the fully supervised model's 2.89% (73k labeled samples); the current state of the art reaches only 4.29% and 3.74%, respectively. On LSUN we outperform a state-of-the-art method even when using 100× fewer unlabeled samples (with 500 labeled samples). Finally, we provide a theoretical insight into why feature distribution misalignment occurs and show that our method reduces it.
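The abstract does not spell out the training procedure, but the general idea of adversarial feature distribution alignment can be illustrated with a short sketch: a small discriminator learns to tell whether a feature vector came from a labeled or an unlabeled sample, while the feature extractor is updated to fool it, pulling the two feature distributions together. The sketch below is a minimal, hypothetical PyTorch illustration of this GAN-style scheme, not the authors' actual implementation; the networks, batch sizes, and hyperparameters are placeholders.

```python
# Minimal sketch of adversarial feature alignment between labeled and
# unlabeled batches. Illustrative only: FeatureExtractor and
# Discriminator are hypothetical stand-ins, not the paper's code.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Toy backbone mapping 32x32 RGB images to feature vectors."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Predicts whether a feature came from a labeled or unlabeled sample."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, f):
        return self.net(f)

f_net, d_net = FeatureExtractor(), Discriminator()
opt_f = torch.optim.Adam(f_net.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(d_net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x_lab = torch.randn(8, 3, 32, 32)  # stand-in labeled batch
x_unl = torch.randn(8, 3, 32, 32)  # stand-in unlabeled batch

# 1) Discriminator step: separate labeled (1) from unlabeled (0) features.
with torch.no_grad():
    f_lab, f_unl = f_net(x_lab), f_net(x_unl)
d_loss = bce(d_net(f_lab), torch.ones(8, 1)) + \
         bce(d_net(f_unl), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Alignment step: update only the extractor so unlabeled features
#    look "labeled" to the discriminator, reducing the distribution
#    misalignment the paper attributes to overfitting.
align_loss = bce(d_net(f_net(x_unl)), torch.ones(8, 1))
opt_f.zero_grad(); align_loss.backward(); opt_f.step()
```

In a full SSL pipeline this alignment loss would be combined with the usual supervised cross-entropy on the labeled batch (and possibly a consistency regularizer), with the two steps alternated every iteration as in standard adversarial training.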




Updated: 2020-10-08