Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities
International Journal of Computer Vision ( IF 19.5 ) Pub Date : 2019-11-27 , DOI: 10.1007/s11263-019-01265-2
Guo-Jun Qi

In this paper, we present the Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN). Specifically, the LS-GAN trains a loss function to separate real and fake samples by designated margins, while alternately learning a generator that produces realistic samples by minimizing their losses. The LS-GAN further regularizes its loss function with a Lipschitz condition on the density of real data, yielding a regularized model that generalizes better than the classic GAN: it can produce new data from a reasonable number of training examples. We then present a Generalized LS-GAN (GLS-GAN) and show that it contains a large family of regularized GAN models, including both the LS-GAN and the Wasserstein GAN, as special cases. In experiments comparing against other GAN models, both LS-GAN and GLS-GAN show competitive ability to generate new images, measured by the Minimum Reconstruction Error (MRE) on a separate test set. We further extend the LS-GAN to a conditional form for supervised and semi-supervised learning problems, and demonstrate its strong performance on image classification tasks.
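To make the objectives concrete, here is a minimal numpy sketch of the loss-sensitive critic and generator objectives described above. This is an illustration, not the paper's implementation: `margin` stands for a precomputed data-dependent margin Δ(x, G(z)), the function names and the `slope` / `lam` parameters are ours, and the GLS-GAN is sketched with a leaky-rectified cost whose slope interpolates between the LS-GAN (slope 0) and a Wasserstein-style linear cost (slope 1).

```python
import numpy as np

def ls_gan_critic_objective(loss_real, loss_fake, margin, lam=1.0):
    """Loss-sensitive critic objective (sketch).

    Real samples should incur lower loss than fakes by at least a
    data-dependent margin: the hinge term (margin + L(x) - L(G(z)))_+
    penalizes violations of that margin.
    """
    hinge = np.maximum(0.0, margin + loss_real - loss_fake)
    return loss_real.mean() + lam * hinge.mean()

def gls_gan_critic_objective(loss_real, loss_fake, margin, slope=0.0, lam=1.0):
    """Generalized LS-GAN critic objective (sketch).

    Replaces the hinge with a leaky-rectified cost C_v(a) = max(a, v*a).
    slope=0 recovers the LS-GAN hinge; slope=1 makes the cost linear,
    giving a Wasserstein-style objective.
    """
    a = margin + loss_real - loss_fake
    cost = np.maximum(a, slope * a)
    return loss_real.mean() + lam * cost.mean()

def ls_gan_generator_objective(loss_fake):
    """The generator simply minimizes the loss assigned to its samples."""
    return loss_fake.mean()
```

With `slope=0` the generalized objective coincides with the loss-sensitive one on the same inputs, which is the sense in which LS-GAN is a special case of GLS-GAN.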

Updated: 2019-11-27