Meta-Learning for Variational Inference
arXiv - CS - Machine Learning. Pub Date: 2020-07-06. DOI: arxiv-2007.02912
Ruqi Zhang, Yingzhen Li, Christopher De Sa, Sam Devlin, Cheng Zhang

Variational inference (VI) plays an essential role in approximate Bayesian inference due to its computational efficiency and broad applicability. Crucial to the performance of VI is the choice of the associated divergence measure, as VI approximates the intractable distribution by minimizing this divergence. In this paper, we propose a meta-learning algorithm that learns a divergence suited to the task of interest, automating the design of VI methods. In addition, when our method is deployed in few-shot learning scenarios, it learns the initialization of the variational parameters at no additional cost. We demonstrate that our approach outperforms standard VI on Gaussian mixture distribution approximation, Bayesian neural network regression, image generation with variational autoencoders, and recommender systems with a partial variational autoencoder.
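To make the core idea concrete, below is a minimal, illustrative PyTorch sketch (not the authors' code). It assumes a 1-D Gaussian-mixture toy target, a Gaussian variational family, and a divergence parameterized by a single learnable scalar `alpha` via the variational Rényi bound; the meta-learning step takes one inner VI update under the alpha-dependent objective, then updates `alpha` by differentiating a post-adaptation ELBO through that update (MAML-style). All specifics here (the target, step sizes, one inner step, the ELBO meta-objective) are assumptions of this sketch, not details taken from the paper.

```python
import torch

torch.manual_seed(0)

def log_p(x):
    """Log-density of a toy two-component 1-D Gaussian mixture target (illustrative)."""
    comp1 = torch.distributions.Normal(-2.0, 0.6).log_prob(x)
    comp2 = torch.distributions.Normal(2.0, 0.6).log_prob(x)
    return torch.logsumexp(torch.stack([comp1, comp2]), dim=0) - torch.log(torch.tensor(2.0))

def renyi_bound(mu, log_std, alpha, n_samples=64):
    """Monte Carlo variational Renyi bound L_alpha; the limit alpha -> 1 recovers the ELBO."""
    q = torch.distributions.Normal(mu, log_std.exp())
    x = q.rsample((n_samples,))         # reparameterized samples keep the gradient path
    log_w = log_p(x) - q.log_prob(x)    # log importance weights: log p(x) - log q(x)
    n = torch.tensor(float(n_samples))
    return (torch.logsumexp((1.0 - alpha) * log_w, dim=0) - torch.log(n)) / (1.0 - alpha)

def elbo(mu, log_std, n_samples=64):
    """Standard evidence lower bound, used here as the meta-objective."""
    q = torch.distributions.Normal(mu, log_std.exp())
    x = q.rsample((n_samples,))
    return (log_p(x) - q.log_prob(x)).mean()

alpha = torch.tensor(0.5, requires_grad=True)   # meta-learned divergence parameter
meta_opt = torch.optim.Adam([alpha], lr=1e-2)

for step in range(200):
    # Fresh "task": re-initialize the variational parameters each episode.
    mu = torch.zeros((), requires_grad=True)
    log_std = torch.zeros((), requires_grad=True)

    # Inner VI step: ascend the alpha-dependent bound; create_graph=True keeps
    # the graph so the meta-gradient can flow back into alpha.
    bound = renyi_bound(mu, log_std, alpha)
    g_mu, g_ls = torch.autograd.grad(bound, (mu, log_std), create_graph=True)
    mu_a, ls_a = mu + 0.1 * g_mu, log_std + 0.1 * g_ls

    # Meta step: judge the adapted posterior by its ELBO and update alpha.
    meta_loss = -elbo(mu_a, ls_a)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    with torch.no_grad():               # keep alpha away from the alpha = 1 singularity
        alpha.clamp_(0.05, 0.95)

print(f"learned alpha = {alpha.item():.3f}")
```

The scalar `alpha` stands in for whatever richer divergence parameterization the method actually learns; the point of the sketch is only the mechanics, namely that the meta-gradient flows through the inner VI update into the divergence parameters, and that the same inner loop could also meta-learn the variational initialization.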

Updated: 2020-07-07