Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 23.6), Pub Date: 2018-07-23, DOI: 10.1109/tpami.2018.2858821
Takeru Miyato, Shin-Ichi Maeda, Masanori Koyama, Shin Ishii

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given the input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low: for neural networks, the approximated gradient of the virtual adversarial loss can be computed with no more than two pairs of forward and backward propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
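Concretely, the virtual adversarial loss described above measures local distributional smoothness (LDS) at each input. Reconstructed in LaTeX from the abstract's description (the symbols \hat{\theta}, \epsilon, and r_{\mathrm{vadv}} follow the paper's notation, but this transcription is an assumption):

\mathrm{LDS}(x, \theta) = D_{\mathrm{KL}}\left[ p(y \mid x, \hat{\theta}) \,\|\, p(y \mid x + r_{\mathrm{vadv}}, \theta) \right], \quad r_{\mathrm{vadv}} = \operatorname*{arg\,max}_{\|r\|_2 \le \epsilon} D_{\mathrm{KL}}\left[ p(y \mid x, \hat{\theta}) \,\|\, p(y \mid x + r, \theta) \right],

where \hat{\theta} is the current parameter estimate treated as a constant, so the model is pulled toward its own (label-free) predictions rather than toward ground-truth labels.

The claim that the approximated gradient needs no more than two pairs of forward and backward propagations comes from approximating r_vadv with a single step of power iteration. The PyTorch sketch below illustrates that procedure under those assumptions; the function name, the hyperparameter defaults (xi, eps, n_power), and the overall structure are illustrative, not the authors' reference implementation.

import torch
import torch.nn.functional as F

def _normalize(d):
    # Rescale each sample's perturbation to unit L2 norm.
    n = d.flatten(1).norm(dim=1).clamp_min(1e-12)
    return d / n.view(-1, *([1] * (d.dim() - 1)))

def vat_loss(model, x, xi=1e-6, eps=2.5, n_power=1):
    # p(y | x, theta_hat): current predictions, treated as constants.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
    # Start the power iteration from a random unit direction d.
    d = _normalize(torch.randn_like(x))
    # Each power-iteration step costs one forward and one backward pass.
    for _ in range(n_power):
        d.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p,
                      reduction='batchmean')
        d = _normalize(torch.autograd.grad(kl, d)[0])
    # Virtual adversarial perturbation and the resulting smoothness term;
    # differentiating it during training adds the second forward/backward pair.
    r_vadv = eps * d.detach()
    return F.kl_div(F.log_softmax(model(x + r_vadv), dim=1), p,
                    reduction='batchmean')

Because this term needs no labels, in semi-supervised training it can be averaged over both labeled and unlabeled batches and added, with a weighting coefficient, to the supervised cross-entropy loss; the "simple enhancement" mentioned in the abstract additionally minimizes the conditional entropy of the predictions on unlabeled data.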

Updated: 2019-07-02