Learning Zero-sum Stochastic Games with Posterior Sampling
arXiv - CS - Computer Science and Game Theory. Pub Date: 2021-09-08, DOI: arxiv-2109.03396
Mehdi Jafarnia-Jahromi, Rahul Jain, Ashutosh Nayyar

In this paper, we propose Posterior Sampling Reinforcement Learning for Zero-sum Stochastic Games (PSRL-ZSG), the first online learning algorithm that achieves a Bayesian regret bound of $O(HS\sqrt{AT})$ for infinite-horizon zero-sum stochastic games with the average-reward criterion. Here $H$ is an upper bound on the span of the bias function, $S$ is the number of states, $A$ is the number of joint actions, and $T$ is the horizon. We consider the online setting where the opponent cannot be controlled and may follow an arbitrary time-adaptive, history-dependent strategy. This improves on the best existing regret bound of $O(\sqrt[3]{DS^2AT^2})$ by Wei et al. (2017) under the same assumption, and matches the theoretical lower bound in $A$ and $T$.
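To make the posterior-sampling idea concrete, below is a minimal sketch (not the authors' implementation) of how such an agent could be organized for a small tabular zero-sum stochastic game. It assumes the stage rewards r[s, a1, a2] are known and only the transition kernel is learned, via independent Dirichlet posteriors per state and joint action; planning on each sampled model uses discounted Shapley value iteration as a stand-in for the average-reward planning step analyzed in the paper, and fixed-length episodes replace the paper's episode-stopping rule. The `env` interface (`reset`, `step`, `opponent_action`) and the helper names are hypothetical.

```python
# Illustrative sketch of posterior sampling for a zero-sum stochastic game
# (PSRL-style); rewards are assumed known, only transitions are learned.
import numpy as np
from scipy.optimize import linprog


def solve_matrix_game(G):
    """Return (value, row player's mixed strategy) of the zero-sum matrix game G via an LP."""
    n, m = G.shape
    # Variables: x_1..x_n (row mixture) and v (game value); maximize v.
    c = np.zeros(n + 1)
    c[-1] = -1.0                               # linprog minimizes, so minimize -v
    A_ub = np.hstack([-G.T, np.ones((m, 1))])  # v <= x^T G[:, j] for every column j
    b_ub = np.zeros(m)
    A_eq = np.ones((1, n + 1))
    A_eq[0, -1] = 0.0                          # sum_i x_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(None, None)], method="highs")
    return res.x[-1], res.x[:n]


def solve_sampled_game(P, r, gamma=0.99, iters=200):
    """Discounted Shapley value iteration on a sampled model P[s, a1, a2, s']."""
    S, A1, _, _ = P.shape
    V = np.zeros(S)
    pi = np.zeros((S, A1))
    for _ in range(iters):
        for s in range(S):
            Q = r[s] + gamma * P[s] @ V        # (A1, A2) stage game at state s
            V[s], pi[s] = solve_matrix_game(Q)
    return pi


def psrl_zsg_sketch(env, S, A1, A2, r, episodes=50, horizon=100, seed=0):
    rng = np.random.default_rng(seed)
    counts = np.ones((S, A1, A2, S))           # Dirichlet(1,...,1) prior on transitions
    s = env.reset()
    for _ in range(episodes):
        # Sample one transition kernel from the posterior: one Dirichlet draw per (s, a1, a2).
        P = np.zeros((S, A1, A2, S))
        for i in range(S):
            for a1 in range(A1):
                for a2 in range(A2):
                    P[i, a1, a2] = rng.dirichlet(counts[i, a1, a2])
        pi = solve_sampled_game(P, r)          # maximin policy for the sampled game
        for _ in range(horizon):
            p = np.clip(pi[s], 0.0, None)
            p /= p.sum()                       # guard against tiny LP numerical error
            a1 = rng.choice(A1, p=p)           # agent plays its mixed maximin action
            a2 = env.opponent_action(s)        # opponent is uncontrolled and arbitrary
            s_next = env.step(a1, a2)
            counts[s, a1, a2, s_next] += 1     # Bayesian posterior update (transition counts)
            s = s_next
    return counts
```

The key design choice the sketch illustrates is the separation of concerns in posterior sampling: all exploration comes from drawing a single plausible model from the posterior each episode and then playing the maximin policy of that sampled game against whatever the opponent does.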

Updated: 2021-09-09