Towards Tight Bounds on the Sample Complexity of Average-reward MDPs
arXiv - CS - Data Structures and Algorithms Pub Date : 2021-06-13 , DOI: arxiv-2106.07046
Yujia Jin, Aaron Sidford

We prove new upper and lower bounds on the sample complexity of finding an $\epsilon$-optimal policy of an infinite-horizon average-reward Markov decision process (MDP) given access to a generative model. When the mixing time of the probability transition matrix of every policy is at most $t_\mathrm{mix}$, we provide an algorithm that solves the problem using $\widetilde{O}(t_\mathrm{mix} \epsilon^{-3})$ (oblivious) samples per state-action pair. Further, we provide a lower bound showing that a linear dependence on $t_\mathrm{mix}$ is necessary in the worst case for any algorithm that uses oblivious samples. We obtain our results by establishing connections between infinite-horizon average-reward MDPs and discounted MDPs that may be of further utility.
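The $\widetilde{O}(t_\mathrm{mix}\,\epsilon^{-3})$ rate is consistent with a rough back-of-the-envelope calculation based on such a reduction. The sketch below is not taken from the abstract: it assumes an approximation lemma of the form $|(1-\gamma)V^\pi_\gamma - \rho^\pi| = O(t_\mathrm{mix}(1-\gamma))$ relating discounted values $V^\pi_\gamma$ to average rewards $\rho^\pi$, together with the known $\widetilde{O}((1-\gamma)^{-3}\epsilon'^{-2})$ per state-action pair sample complexity for computing an $\epsilon'$-optimal policy of a $\gamma$-discounted MDP from a generative model.

Choosing the effective horizon so that the reduction error $t_\mathrm{mix}(1-\gamma)$ is $O(\epsilon)$, and solving the discounted MDP to accuracy $\epsilon' = \Theta\big(\epsilon\,(1-\gamma)^{-1}\big)$, gives

$1-\gamma = \Theta(\epsilon / t_\mathrm{mix}), \qquad \epsilon' = \Theta\big(\epsilon\,(1-\gamma)^{-1}\big) = \Theta(t_\mathrm{mix}),$

so the per state-action pair cost of the discounted solve is

$\widetilde{O}\big((1-\gamma)^{-3}\,\epsilon'^{-2}\big) = \widetilde{O}\big((t_\mathrm{mix}/\epsilon)^{3} \cdot t_\mathrm{mix}^{-2}\big) = \widetilde{O}\big(t_\mathrm{mix}\,\epsilon^{-3}\big).$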

Updated: 2021-06-15