Unbiased estimation of the gradient of the log-likelihood in inverse problems
Statistics and Computing (IF 2.2) Pub Date: 2021-03-03, DOI: 10.1007/s11222-021-09994-6
Ajay Jasra, Kody J. H. Law, Deng Lu

We consider the problem of estimating a parameter \(\theta \in \Theta \subseteq {\mathbb {R}}^{d_{\theta }}\) associated with a Bayesian inverse problem. Typically one must resort to a numerical approximation of the gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e. the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation in terms of the original stochastic model of interest, but can also be exploited in stochastic gradient algorithms, which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but also of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to that of multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm is highly amenable to parallel computation.
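The key mechanism behind such debiasing (the paper's estimator is a randomized multilevel construction in this spirit) is to randomize over the discretization level and reweight the telescoping increments by their sampling probabilities, so the expectation equals the limit of the discretized quantities. The sketch below is a hedged illustration of the generic "single-term" debiasing idea on a toy deterministic quantity (a midpoint-rule integral), not the paper's actual algorithm; all function names and the choice of level distribution are illustrative assumptions.

```python
import math
import random

def midpoint(l):
    """Level-l discretized quantity A_l: midpoint rule for the integral of
    sin on [0, 1] with 2^l cells; A_l -> A = 1 - cos(1) as l -> infinity,
    standing in for any quantity with discretization bias."""
    n = 2 ** l
    h = 1.0 / n
    return h * sum(math.sin((i + 0.5) * h) for i in range(n))

def single_term_estimate(rng):
    """One draw of the single-term debiasing estimator: sample a level l
    with P(l) = 2^-(l+1) (an assumed geometric choice), then return the
    telescoping increment (A_l - A_{l-1}) reweighted by 1/P(l).  Summing
    P(l) * increment / P(l) over l shows the expectation is the limit A,
    i.e. the estimate carries no discretization bias."""
    l = 0
    while rng.random() < 0.5:
        l += 1
    p_l = 0.5 ** (l + 1)
    delta = midpoint(l) - (midpoint(l - 1) if l > 0 else 0.0)
    return delta / p_l

rng = random.Random(0)
n = 200_000
est = sum(single_term_estimate(rng) for _ in range(n)) / n
exact = 1.0 - math.cos(1.0)  # true value of the integral
```

Finite variance requires the increments to decay fast enough relative to the level probabilities (here the midpoint increments shrink like 4^-l while P(l) shrinks like 2^-l, so the sum of delta^2 / P(l) converges); the paper establishes the analogous condition for its log-likelihood gradient estimator.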



