The recursive variational Gaussian approximation (R-VGA)
Statistics and Computing (IF 1.6) Pub Date: 2021-12-20, DOI: 10.1007/s11222-021-10068-w
Marc Lambert, Silvère Bonnabel, Francis Bach

We consider the problem of computing a Gaussian approximation to the posterior distribution of a parameter given N observations and a Gaussian prior. Owing to the need to process large sample sizes N, a variety of approximate, tractable methods revolving around online learning have flourished over the past decades. In the present work, we propose to use variational inference to compute a Gaussian approximation to the posterior through a single pass over the data. Our algorithm is a recursive version of the variational Gaussian approximation, which we call the recursive variational Gaussian approximation (R-VGA). We start from the prior, and for each observation we compute the Gaussian approximation that is nearest, in the sense of the Kullback–Leibler divergence, to the posterior given this observation. This approximation then serves as the new prior when incorporating the next observation. This recursion over a sequence of optimal Gaussian approximations leads to a novel implicit update scheme that resembles the online Newton algorithm and is shown to boil down to the Kalman filter for Bayesian linear regression. In the context of Bayesian logistic regression, the implicit scheme can be solved, and the algorithm is shown to perform better than the extended Kalman filter, while being less computationally demanding than its sampling counterparts.
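As a concrete illustration of the recursion, the sketch below covers the Bayesian linear regression case mentioned in the abstract, where the recursive Gaussian update coincides with the Kalman filter: each observation updates the Gaussian (mu, P) in closed form, and the result serves as the prior for the next observation. The NumPy implementation, the function name `kalman_linear_regression_update`, and the simulated data are illustrative assumptions, not the paper's code.

```python
import numpy as np

def kalman_linear_regression_update(mu, P, x, y, sigma2):
    """One recursive Gaussian update for Bayesian linear regression,
    y = x @ theta + noise, noise ~ N(0, sigma2).

    In this linear-Gaussian case the KL-optimal Gaussian is the exact
    posterior, and the update is the standard Kalman filter step."""
    Px = P @ x                       # P x
    s = x @ Px + sigma2              # innovation variance x^T P x + sigma^2
    k = Px / s                       # Kalman gain
    mu_new = mu + k * (y - x @ mu)   # mean update
    P_new = P - np.outer(k, Px)      # covariance update P - P x x^T P / s
    return mu_new, P_new

# Single pass over the data, starting from the Gaussian prior N(mu0, P0)
# (synthetic data, illustrative only).
rng = np.random.default_rng(0)
d, N, sigma2 = 3, 200, 0.25
theta_true = rng.normal(size=d)
X = rng.normal(size=(N, d))
Y = X @ theta_true + np.sqrt(sigma2) * rng.normal(size=N)

mu, P = np.zeros(d), np.eye(d)       # prior
for x, y in zip(X, Y):
    mu, P = kalman_linear_regression_update(mu, P, x, y, sigma2)
```

In this conjugate setting the recursion is exact, so the final (mu, P) is the exact Gaussian posterior; the interest of a recursive variational scheme lies in non-conjugate likelihoods such as logistic regression.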



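For a non-conjugate likelihood such as logistic regression, the update described in the abstract is implicit, because the expectations defining the new Gaussian are taken under that Gaussian itself. The sketch below shows one plausible way to resolve such an implicit update: a short inner fixed-point loop, with the one-dimensional Gaussian expectations of the sigmoid approximated by Gauss–Hermite quadrature. The fixed-point solver, the quadrature rule, and the function name `rvga_like_logistic_update` are assumptions made for illustration; the paper's exact resolution of the implicit scheme may differ.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Gauss-Hermite nodes/weights for 1D Gaussian expectations (assumed quadrature).
GH_T, GH_W = np.polynomial.hermite.hermgauss(20)

def gauss_expect(f, m, v):
    """Approximate E[f(a)] for a ~ N(m, v) with Gauss-Hermite quadrature."""
    a = m + np.sqrt(2.0 * v) * GH_T
    return np.dot(GH_W, f(a)) / np.sqrt(np.pi)

def rvga_like_logistic_update(mu, P, x, y, n_inner=5):
    """One recursive Gaussian update for logistic regression, y in {-1, +1}.

    The update is implicit (the expectations are taken under the *new*
    Gaussian), so it is solved here with a small fixed-point loop --
    an illustrative choice, not necessarily the paper's exact scheme."""
    Px = P @ x
    xPx = x @ Px
    mu_new, P_new = mu, P
    for _ in range(n_inner):
        m = x @ mu_new                 # mean of a = x^T theta under current iterate
        v = x @ P_new @ x              # variance of a under current iterate
        # Scalar parts of the expected gradient and expected negative Hessian.
        alpha = y * gauss_expect(lambda a: sigmoid(-y * a), m, v)
        beta = gauss_expect(lambda a: sigmoid(a) * (1.0 - sigmoid(a)), m, v)
        # Sherman-Morrison form of (P^{-1} + beta x x^T)^{-1}.
        P_new = P - np.outer(Px, Px) * (beta / (1.0 + beta * xPx))
        mu_new = mu + alpha * (P_new @ x)
    return mu_new, P_new
```

A single pass over the data then mirrors the linear-regression loop above, with this update called once per observation.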

Updated: 2021-12-20