Multimodal Variational Autoencoders for Semi-Supervised Learning: In Defense of Product-of-Experts
arXiv - CS - Machine Learning. Pub Date: 2021-01-18, DOI: arxiv-2101.07240
Svetlana Kutuzova, Oswin Krause, Douglas McCloskey, Mads Nielsen, Christian Igel

Multimodal generative models should be able to learn a meaningful latent representation that enables coherent joint generation of all modalities (e.g., images and text). Many applications also require the ability to accurately sample modalities conditioned on observations of a subset of the modalities. Often, not all modalities are observed for every training data point, so semi-supervised learning should be possible. In this study, we evaluate a family of product-of-experts (PoE) based variational autoencoders that have these desired properties. We include a novel PoE-based architecture and training procedure. An empirical evaluation shows that the PoE-based models can outperform an additive mixture-of-experts (MoE) approach. Our experiments support the intuition that PoE models are better suited to a conjunctive combination of modalities, while MoEs are better suited to a disjunctive fusion.
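The conjunctive-vs-disjunctive intuition has a concrete algebraic core that the abstract does not spell out. Below is a minimal NumPy sketch, assuming diagonal-Gaussian experts and uniform mixture weights (the common formulation of PoE/MoE fusion in multimodal VAEs, not necessarily the exact architecture evaluated in this paper): a product of Gaussians is again Gaussian, with the experts' precisions adding up, whereas a mixture draw commits to a single expert. All variable names here are illustrative.

```python
import numpy as np

def poe_fusion(mus, logvars):
    """Fuse Gaussian experts q_i(z | x_i) = N(mu_i, var_i) with a
    product of experts (PoE). A product of Gaussians is Gaussian:
    precisions (1/var) add, and the mean is the precision-weighted
    average of the experts' means. Every observed modality constrains
    the joint posterior -- a conjunctive ("AND") combination."""
    mus = np.asarray(mus)                      # shape: (n_experts, latent_dim)
    precisions = np.exp(-np.asarray(logvars))  # 1/var_i per expert
    joint_var = 1.0 / precisions.sum(axis=0)   # precisions add
    joint_mu = joint_var * (precisions * mus).sum(axis=0)
    return joint_mu, np.log(joint_var)

def moe_sample(mus, logvars, rng=None):
    """Sample from an additive mixture of experts (MoE) with uniform
    weights: pick one expert and sample from it alone -- a disjunctive
    ("OR") combination, since a single modality determines the draw."""
    if rng is None:
        rng = np.random.default_rng()
    i = rng.integers(len(mus))
    mu = np.asarray(mus[i])
    std = np.exp(0.5 * np.asarray(logvars[i]))
    return mu + std * rng.standard_normal(mu.shape)

# Hypothetical usage: fuse image and text experts. In the
# semi-supervised setting, experts for unobserved modalities are
# simply left out of the lists.
img_mu, img_logvar = np.zeros(8), np.zeros(8)    # stand-in encoder outputs
txt_mu, txt_logvar = np.ones(8), np.full(8, -1.0)
mu, logvar = poe_fusion([img_mu, txt_mu], [img_logvar, txt_logvar])
z = moe_sample([img_mu, txt_mu], [img_logvar, txt_logvar])
```

In many PoE-based multimodal VAEs the standard-normal prior N(0, I) is also included as an extra expert (mu = 0, logvar = 0), so the fused posterior remains well defined even when every modality-specific expert is dropped, which is one reason PoE fusion handles missing modalities naturally.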

Updated: 2021-01-19