Cooperative and Stochastic Multi-Player Multi-Armed Bandit: Optimal Regret With Neither Communication Nor Collisions
arXiv - CS - Multiagent Systems. Pub Date: 2020-11-08. arXiv: 2011.03896
Sébastien Bubeck, Thomas Budzinski, Mark Sellke

We consider the cooperative multi-player version of the stochastic multi-armed bandit problem. We study the regime where the players cannot communicate but have access to shared randomness. In prior work by the first two authors, a strategy for this regime was constructed for two players and three arms, with regret $\tilde{O}(\sqrt{T})$, and with no collisions at all between the players (with very high probability). In this paper we show that these properties (near-optimal regret and no collisions at all) are achievable for any number of players and arms. At a high level, the previous strategy heavily relied on a $2$-dimensional geometric intuition that was difficult to generalize in higher dimensions, while here we take a more combinatorial route to build the new strategy.

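To make the setting described in the abstract concrete, the following is a minimal, self-contained sketch of the environment it studies: several players repeatedly pull arms of the same stochastic bandit, cannot communicate, share a common random seed, and a "collision" occurs when two players pull the same arm in the same round. This is not the paper's strategy; the baseline shown (a fixed collision-free assignment derived from the shared seed) only illustrates the collision and regret accounting, and all names (MultiPlayerBandit, step, run) are illustrative assumptions. The convention that colliding players receive zero reward is one common collision model, assumed here for concreteness.

```python
import numpy as np

class MultiPlayerBandit:
    """Toy m-player, K-arm Bernoulli bandit with collision counting."""

    def __init__(self, means, n_players, rng):
        self.means = np.asarray(means)   # arm means, unknown to the players
        self.n_players = n_players
        self.rng = rng                   # environment randomness (not shared)
        self.collisions = 0

    def step(self, pulls):
        """pulls[i] = arm chosen by player i this round. Returns per-player rewards."""
        rewards = np.zeros(self.n_players)
        counts = np.bincount(pulls, minlength=len(self.means))
        for i, arm in enumerate(pulls):
            if counts[arm] > 1:          # collision: colliding players get reward 0
                self.collisions += 1
            else:
                rewards[i] = self.rng.random() < self.means[arm]
        return rewards

def run(T=10_000, means=(0.9, 0.8, 0.5), n_players=2, seed=0):
    # Shared randomness: every player derives decisions from the same seed,
    # without ever observing the other players' actions or rewards.
    shared = np.random.default_rng(seed)
    env = MultiPlayerBandit(means, n_players, np.random.default_rng(seed + 1))

    # Naive collision-free baseline (NOT the paper's strategy): the shared seed
    # fixes a one-to-one player-to-arm assignment. No collisions ever occur,
    # but regret grows linearly unless the assignment happens to cover the
    # n_players best arms.
    assignment = shared.permutation(len(means))[:n_players]

    total_reward = 0.0
    for _ in range(T):
        total_reward += env.step(assignment).sum()

    # Benchmark: n_players non-colliding players on the n_players best arms.
    best = np.sort(means)[::-1][:n_players].sum() * T
    print(f"regret = {best - total_reward:.1f}, collisions = {env.collisions}")

if __name__ == "__main__":
    run()
```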
Updated: 2020-11-10