SCGN: novel generative model using the convergence of latent space by training
Electronics Letters (IF 0.7) Pub Date: 2020-08-01, DOI: 10.1049/el.2020.1333
H. Kim, S.H. Jung

Generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs) have recently been applied to various fields. However, VAEs and GANs suffer from blurry outputs and mode collapse, respectively. Here, the authors propose a novel generative model, the self-converging generative network (SCGN), to address these issues. Self-converging means that the latent vectors converge into themselves by being trained in pairs with the training data, which allows the SCGN to reconstruct all training data. In the authors' model, the latent vectors and the weights of the generator are trained alternately. Specifically, the latent vectors are trained to follow a normal distribution using a loss function derived from the Kullback-Leibler divergence together with a pixel-wise loss, while the generator weights are adjusted so that the generator reproduces the training data under a pixel-wise loss alone. As a result, the SCGN did not fall into the mode collapse that occurs in GANs, and it produced clearer images than VAEs because it does not use sampling. Moreover, the SCGN successfully learned the manifold of the dataset in extensive experiments on CelebA.
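The abstract describes an alternating optimization: one trainable latent vector is paired with each training image and updated with a pixel-wise reconstruction loss plus a Kullback-Leibler-derived regularizer toward a normal distribution, while the generator weights are updated with the pixel-wise loss alone. The sketch below illustrates that scheme in PyTorch; the generator architecture, the batch-statistics form of the KL term, the choice of MSE as the pixel-wise loss, and all hyper-parameters are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch of the alternating training scheme described in the
# abstract. Architecture, KL formulation, and hyper-parameters are assumed.
import torch
import torch.nn as nn

latent_dim, img_dim, n_samples = 64, 28 * 28, 1024

# One trainable latent vector is paired with each training image.
latents = nn.Parameter(torch.randn(n_samples, latent_dim))

# Placeholder generator; the paper does not specify the architecture here.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Sigmoid(),
)

opt_z = torch.optim.Adam([latents], lr=1e-2)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
pixel_loss = nn.MSELoss()  # pixel-wise loss (assumed to be MSE)

def kl_to_standard_normal(z):
    # KL(N(mu, sigma^2) || N(0, I)) computed from the batch statistics of
    # the latent vectors -- one plausible reading of the "loss function
    # derived from the Kullback-Leibler divergence" in the abstract.
    mu, var = z.mean(dim=0), z.var(dim=0)
    return 0.5 * (mu.pow(2) + var - var.log() - 1.0).sum()

def train_step(x, idx, kl_weight=1e-3):
    # Step 1: update the latent vectors so they reconstruct their paired
    # images while being pushed toward a standard normal distribution.
    z = latents[idx]
    loss_z = pixel_loss(generator(z), x) + kl_weight * kl_to_standard_normal(z)
    opt_z.zero_grad()
    loss_z.backward()
    opt_z.step()
    # Step 2: update the generator weights (latents frozen) with the
    # pixel-wise loss alone.
    loss_g = pixel_loss(generator(latents[idx].detach()), x)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_z.item(), loss_g.item()

# Smoke test with random stand-in data; real training would loop over a
# dataset, pairing each image with its own latent index.
x = torch.rand(32, img_dim)   # a batch of flattened images in [0, 1]
idx = torch.arange(32)
print(train_step(x, idx))
```

After training, new images could be generated by sampling z from N(0, I) and passing it through the generator, since the KL term drives the paired latent vectors toward that distribution.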

Updated: 2020-08-01