Convergence rates for optimised adaptive importance samplers
Statistics and Computing (IF 1.6), Pub Date: 2021-01-21, DOI: 10.1007/s11222-020-09983-1
Ömer Deniz Akyildiz , Joaquín Míguez

Adaptive importance samplers are Monte Carlo algorithms for estimating expectations with respect to a target distribution; they adapt themselves over a sequence of iterations to obtain better estimators. Although it is straightforward to show that they have the same \(\mathcal {O}(1/\sqrt{N})\) convergence rate as standard importance samplers, where N is the number of Monte Carlo samples, the behaviour of adaptive importance samplers over the number of iterations has been left relatively unexplored. In this work, we investigate an adaptation strategy based on convex optimisation which leads to a class of adaptive importance samplers termed optimised adaptive importance samplers (OAIS). These samplers rely on the iterative minimisation of the \(\chi ^2\)-divergence between an exponential-family proposal and the target. The analysed algorithms are closely related to the class of adaptive importance samplers which minimise the variance of the weight function. We first prove non-asymptotic error bounds for the mean squared errors (MSEs) of these algorithms, which depend explicitly on both the number of iterations and the number of samples. The non-asymptotic bounds derived in this paper imply that when the target belongs to the exponential family, the \(L_2\) errors of the optimised samplers converge to the optimal rate of \(\mathcal {O}(1/\sqrt{N})\), and the rate of convergence in the number of iterations is explicitly provided. When the target does not belong to the exponential family, the rate of convergence is the same but the asymptotic \(L_2\) error increases by a factor \(\sqrt{\rho ^\star } > 1\), where \(\rho ^\star - 1\) is the minimum \(\chi ^2\)-divergence between the target and an exponential-family proposal.
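The scheme described above can be illustrated with a minimal sketch: a Gaussian proposal whose mean is adapted by a stochastic gradient step on the empirical \(\chi^2\) objective (the second moment of the importance weights), followed by a self-normalised importance sampling estimate. The target, step size, sample size, and the self-normalised gradient below are illustrative assumptions, not the paper's exact algorithm or configuration.

```python
import numpy as np

# Sketch of an OAIS-style adaptive importance sampler (illustrative only):
# adapt the mean of a Gaussian proposal N(mu, 1) by stochastic gradient
# descent on rho(mu) = E_q[w(x)^2], the chi^2-type objective, where
# w(x) = pi(x) / q_mu(x) is the importance weight.
rng = np.random.default_rng(0)

def target(x):
    # Unnormalised target: proportional to N(3, 1), so its mean is 3.
    return np.exp(-0.5 * (x - 3.0) ** 2)

mu = 0.0      # adaptable proposal mean (proposal is N(mu, 1))
step = 0.2    # step size for the adaptation (illustrative choice)
N = 2000      # Monte Carlo samples per iteration

for t in range(100):
    x = rng.normal(mu, 1.0, size=N)
    w = target(x) / np.exp(-0.5 * (x - mu) ** 2)  # unnormalised weights
    # Score-function gradient of E_q[w^2]: -E_q[w^2 (x - mu)] for a unit
    # Gaussian proposal; normalised by sum(w^2) here for numerical stability.
    grad = np.sum(w ** 2 * (mu - x)) / np.sum(w ** 2)
    mu -= step * grad

# Final self-normalised importance sampling estimate of E[X] under the target.
x = rng.normal(mu, 1.0, size=N)
w = target(x) / np.exp(-0.5 * (x - mu) ** 2)
est = np.sum(w * x) / np.sum(w)
```

After adaptation the proposal mean settles near the target mean, so the weights become nearly constant and the estimator's variance shrinks accordingly, which is the practical point of minimising the \(\chi^2\)-divergence.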



Updated: 2021-01-21