Multilevel Monte Carlo Estimation of the Expected Value of Sample Information
SIAM/ASA Journal on Uncertainty Quantification (IF 2.1), Pub Date: 2020-09-30, DOI: 10.1137/19m1284981
Tomohiko Hironaka, Michael B. Giles, Takashi Goda, Howard Thom

SIAM/ASA Journal on Uncertainty Quantification, Volume 8, Issue 3, Page 1236-1259, January 2020.
We study Monte Carlo estimation of the expected value of sample information (EVSI), which measures the expected benefit of gaining additional information for decision making under uncertainty. EVSI is defined as a nested expectation in which an outer expectation is taken with respect to one random variable $Y$ and an inner conditional expectation with respect to another random variable $\theta$. Although the nested (Markov chain) Monte Carlo estimator has often been used in this context, it notoriously requires a cost of $O(\varepsilon^{-2-1/\alpha})$ to achieve a root-mean-square accuracy of $\varepsilon$, where $\alpha$ denotes the order of convergence of the bias and is typically between $1/2$ and $1$. In this article we propose a novel, efficient Monte Carlo estimator of EVSI by applying a multilevel Monte Carlo (MLMC) method. Instead of fixing the number of inner samples for $\theta$, as done in the nested Monte Carlo estimator, we consider a geometric progression in the number of inner samples, which yields a hierarchy of estimators of the inner conditional expectation with increasing approximation levels. Based on an elementary telescoping sum, our MLMC estimator is given by a sum of Monte Carlo estimates of the differences between successive approximation levels of the inner conditional expectation. We show, under a set of assumptions on the decision and information models, that successive approximation levels are tightly coupled, which directly proves that our MLMC estimator reduces the necessary computational cost to the optimal $O(\varepsilon^{-2})$. Numerical experiments confirm the considerable computational savings compared with the nested Monte Carlo estimator.
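To make the structure above concrete, the following is a minimal sketch of the nested expectation and the telescoping decomposition described in the abstract; the decision-dependent payoff $f_d(\theta)$, the level notation $P_\ell$, and the factor-2 geometric progression are illustrative choices not fixed by the abstract. In its standard form, EVSI is the nested expectation
\[
\mathrm{EVSI} \;=\; \mathbb{E}_Y\Big[\max_{d}\,\mathbb{E}_{\theta}\big[f_d(\theta)\,\big|\,Y\big]\Big] \;-\; \max_{d}\,\mathbb{E}_{\theta}\big[f_d(\theta)\big].
\]
Writing $P_\ell$ for the approximation of the outer integrand obtained with $N_\ell = N_0\,2^{\ell}$ inner samples of $\theta$, the MLMC estimator rests on the elementary telescoping sum
\[
\mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\big[P_\ell - P_{\ell-1}\big],
\]
where each correction term $\mathbb{E}[P_\ell - P_{\ell-1}]$ is estimated by independent Monte Carlo sampling over the outer variable $Y$. The tight coupling of successive levels makes the variance of $P_\ell - P_{\ell-1}$ decay as $\ell$ grows, which is what brings the overall cost down to $O(\varepsilon^{-2})$.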


Updated: 2020-10-17