Mini-Batch Metropolis–Hastings With Reversible SGLD Proposal
Journal of the American Statistical Association (IF 3.7), Pub Date: 2020-09-14, DOI: 10.1080/01621459.2020.1782222
Tung-Yu Wu, Y. X. Rachel Wang, Wing H. Wong

Abstract

Traditional Markov chain Monte Carlo (MCMC) algorithms are computationally intensive and do not scale well to large data. In particular, the Metropolis–Hastings (MH) algorithm requires passing over the entire dataset to evaluate the likelihood ratio in each iteration. We propose a general framework for performing MH-MCMC using mini-batches of the whole dataset and show that this gives rise to approximately a tempered stationary distribution. We prove that the algorithm preserves the modes of the original target distribution and derive an error bound on the approximation with mild assumptions on the likelihood. To further extend the utility of the algorithm to high-dimensional settings, we construct a proposal with forward and reverse moves using stochastic gradient and show that the construction leads to reasonable acceptance probabilities. We demonstrate the performance of our algorithm in both low dimensional models and high dimensional neural network applications. Particularly in the latter case, compared to popular optimization methods, our method is more robust to the choice of learning rate and improves testing accuracy. Supplementary materials for this article are available online.
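The following is a minimal sketch, in Python, of the general idea described in the abstract: a single Metropolis–Hastings step in which the log-likelihood ratio is estimated on a mini-batch (rescaled by N / batch_size) and the proposal is a Langevin (SGLD-style) move with an explicit forward/reverse correction. The functions grad_log_post, log_lik, and log_prior are hypothetical placeholders supplied by the user, and the sketch illustrates the framework under these assumptions rather than reproducing the authors' exact algorithm or error analysis.

import numpy as np

# Illustrative sketch only, not the authors' exact construction.
# grad_log_post(theta, batch, N): stochastic gradient of the log posterior (hypothetical).
# log_lik(theta, batch): mini-batch log likelihood (hypothetical).
# log_prior(theta): log prior density (hypothetical).

def sgld_proposal(theta, grad, step):
    """Langevin move: gradient drift plus injected Gaussian noise."""
    noise = np.sqrt(2.0 * step) * np.random.randn(*theta.shape)
    return theta + step * grad + noise

def log_q(theta_to, theta_from, grad_from, step):
    """Log density (up to a constant) of the Gaussian Langevin proposal."""
    mean = theta_from + step * grad_from
    return -np.sum((theta_to - mean) ** 2) / (4.0 * step)

def minibatch_mh_step(theta, data, grad_log_post, log_lik, log_prior,
                      step=1e-4, batch_size=128):
    """One mini-batch MH step with forward and reverse Langevin moves."""
    N = len(data)
    idx = np.random.choice(N, size=batch_size, replace=False)
    batch = data[idx]

    grad_cur = grad_log_post(theta, batch, N)         # stochastic gradient at the current state
    theta_new = sgld_proposal(theta, grad_cur, step)  # forward move
    grad_new = grad_log_post(theta_new, batch, N)     # gradient used for the reverse move

    # Mini-batch estimate of the log target ratio, rescaled by N / batch_size;
    # per the abstract, the resulting chain targets approximately a tempered
    # version of the original posterior.
    scale = N / batch_size
    log_target_ratio = (scale * (log_lik(theta_new, batch) - log_lik(theta, batch))
                        + log_prior(theta_new) - log_prior(theta))

    # MALA-style acceptance with forward and reverse proposal densities.
    log_accept = (log_target_ratio
                  + log_q(theta, theta_new, grad_new, step)
                  - log_q(theta_new, theta, grad_cur, step))

    if np.log(np.random.rand()) < log_accept:
        return theta_new, True
    return theta, False

In this sketch the step size plays the role of the learning rate; the paper's claim is that the gradient-informed forward/reverse construction keeps acceptance probabilities reasonable even in high-dimensional settings such as neural network models.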



Updated: 2020-09-14