Generative Model without Prior Distribution Matching
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-09-23, DOI: arxiv-2009.11016
Cong Geng, Jia Wang, Li Chen, Zhiyong Gao

The Variational Autoencoder (VAE) and its variants are classic generative models that learn a low-dimensional latent representation constrained to match some prior distribution (e.g., a Gaussian distribution). Their advantage over GANs is that they can simultaneously generate high-dimensional data and learn latent representations with which to reconstruct the inputs. However, a trade-off has been observed between reconstruction and generation, since matching the prior distribution may destroy the geometric structure of the data manifold. To mitigate this problem, we propose letting the prior match the embedding distribution rather than forcing the latent variables to fit the prior. The embedding distribution is trained with a simple regularized autoencoder architecture that preserves the geometric structure as much as possible. An adversarial strategy is then employed to achieve the latent mapping. We provide both theoretical and experimental support for the effectiveness of our method, which alleviates the contradiction between preserving the topological properties of the data manifold and matching a distribution in the latent space.
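To make the two-stage idea in the abstract concrete, here is a minimal sketch assuming PyTorch: first a regularized autoencoder is trained without any prior-matching term, then a small mapper network is trained adversarially so that mapped prior samples match the frozen embedding distribution. All architectures, names, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of "let the prior match the embedding distribution":
# stage 1 trains a regularized autoencoder; stage 2 adversarially fits
# mapper(noise) to the learned embeddings. Hypothetical sizes and losses.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 784

# Stage 1: a regularized autoencoder that defines the embedding distribution.
encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

def ae_loss(x, l2_weight=1e-3):
    z = encoder(x)
    x_hat = decoder(z)
    # Reconstruction plus a mild norm penalty on the embeddings; note there is
    # no term pushing z toward a fixed prior, so the data geometry is preserved.
    return nn.functional.mse_loss(x_hat, x) + l2_weight * z.pow(2).mean()

# Stage 2: train a mapper so that mapper(noise) matches the (frozen)
# embedding distribution, instead of forcing embeddings toward the prior.
mapper = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
critic = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def critic_loss(x):
    with torch.no_grad():
        z_real = encoder(x)                                  # embeddings are the "real" samples
        z_fake = mapper(torch.randn(x.size(0), latent_dim))  # mapped prior samples
    return bce(critic(z_real), torch.ones(x.size(0), 1)) + \
           bce(critic(z_fake), torch.zeros(x.size(0), 1))

def mapper_loss(batch_size):
    z_fake = mapper(torch.randn(batch_size, latent_dim))
    return bce(critic(z_fake), torch.ones(batch_size, 1))

# Generation after training: x_new = decoder(mapper(torch.randn(1, latent_dim)))
```

In this sketch the decoder never sees prior samples during stage 1, which is what lets reconstruction quality and generation quality be decoupled; the adversarial stage only has to bridge the low-dimensional latent space.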

Updated: 2020-09-24