Realizing GANs via a Tunable Loss Function
arXiv - CS - Information Theory Pub Date : 2021-06-09 , DOI: arxiv-2106.05232
Gowtham R. Kurri, Tyler Sypherd, Lalitha Sankar

We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric based GANs (under a constrained discriminator set). We construct $\alpha$-GAN using a supervised loss function, namely $\alpha$-loss, a tunable loss function that captures several canonical losses. We show that $\alpha$-GAN is intimately related to the Arimoto divergence, which was first proposed by Österreicher (1996) and later studied by Liese and Vajda (2006). We posit that the holistic understanding that $\alpha$-GAN introduces will have the practical benefit of addressing both vanishing gradients and mode collapse.
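To make the interpolation concrete, the following is a minimal sketch of the $\alpha$-loss in the classification setting, following the standard definition $\ell_\alpha(p) = \frac{\alpha}{\alpha-1}\left(1 - p^{(\alpha-1)/\alpha}\right)$ for the probability $p$ assigned to the true label. This is an illustrative implementation, not the authors' code; the limiting behaviors ($\alpha \to 1$ recovering log-loss, $\alpha = \infty$ giving the soft 0-1 loss $1 - p$) follow from the formula.

```python
import math

def alpha_loss(p: float, alpha: float) -> float:
    """Tunable alpha-loss of the probability p in (0, 1] assigned to the true label.

    Sketch based on the standard alpha-loss definition:
      alpha -> 1   recovers log-loss, -log(p)
      alpha = inf  gives the soft 0-1 loss, 1 - p
      other alpha  interpolates via (alpha/(alpha-1)) * (1 - p**((alpha-1)/alpha))
    """
    if alpha == math.inf:
        return 1.0 - p
    if alpha == 1.0:
        # Limit of the general formula as alpha -> 1 (by l'Hopital / Taylor expansion)
        return -math.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

p = 0.7
print(alpha_loss(p, 1.0))       # log-loss: -log(0.7)
print(alpha_loss(p, 1.0001))    # close to log-loss, confirming continuity at alpha = 1
print(alpha_loss(p, math.inf))  # soft 0-1 loss: 1 - 0.7
```

Tuning $\alpha$ thus sweeps a single loss family through several canonical losses, which is what lets the resulting $\alpha$-GAN interpolate between different divergence-based objectives.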

Updated: 2021-06-10