Multi-Agent Common Knowledge Reinforcement Learning
arXiv - CS - Computer Science and Game Theory. Pub Date: 2018-10-27. DOI: arxiv-1810.11702
Christian A. Schroeder de Witt; Jakob N. Foerster; Gregory Farquhar; Philip H. S. Torr; Wendelin Boehmer; Shimon Whiteson

Cooperative multi-agent reinforcement learning often requires decentralised policies, which severely limit the agents' ability to coordinate their behaviour. In this paper, we show that common knowledge between agents allows for complex decentralised coordination. Common knowledge arises naturally in a large number of decentralised cooperative multi-agent tasks, for example, when agents can reconstruct parts of each other's observations. Since agents can independently agree on their common knowledge, they can execute complex coordinated policies that condition on this knowledge in a fully decentralised fashion. We propose multi-agent common knowledge reinforcement learning (MACKRL), a novel stochastic actor-critic algorithm that learns a hierarchical policy tree. Higher levels in the hierarchy coordinate groups of agents by conditioning on their common knowledge, or delegate to lower levels with smaller subgroups but potentially richer common knowledge. The entire policy tree can be executed in a fully decentralised fashion. As the lowest policy tree level consists of independent policies for each agent, MACKRL reduces to independently learnt decentralised policies as a special case. We demonstrate that our method can exploit common knowledge for superior performance on complex decentralised coordination tasks, including a stochastic matrix game and challenging problems in StarCraft II unit micromanagement.
Updated: 2020-01-14
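To make the coordinate-or-delegate hierarchy concrete, below is a minimal illustrative sketch for two agents. It is not the authors' implementation: the dictionary-intersection notion of common knowledge, the function names, the fixed delegation threshold, and the shared-random-seed sampling are all simplifying assumptions standing in for the paper's learned stochastic actor-critic policies.

```python
import random

def common_knowledge(obs_a, obs_b):
    # Features both agents can reconstruct: here assumed to be the agreeing
    # entries of two observation dictionaries (an illustrative stand-in for
    # the paper's common-knowledge construction).
    return {k: v for k, v in obs_a.items() if obs_b.get(k) == v}

def pair_controller(ck, rng):
    # Higher level of the policy tree: conditioned only on common knowledge,
    # either emit a coordinated joint action or delegate (return None).
    if ck and rng.random() < 0.9:  # 0.9 is an arbitrary illustrative threshold
        return rng.choice([("left", "left"), ("right", "right")])
    return None

def independent_policy(obs, rng):
    # Lowest level: an independent per-agent policy on private observations,
    # the special case MACKRL reduces to when the controller always delegates.
    return rng.choice(["left", "right"])

def decentralised_act(obs_a, obs_b, seed=0):
    # Each agent runs the same hierarchy locally. Because both condition on
    # the same common knowledge and a shared random seed, their controllers
    # make identical coordinate-or-delegate decisions without communicating.
    ck = common_knowledge(obs_a, obs_b)  # in practice, reconstructed locally
    rng_a, rng_b = random.Random(seed), random.Random(seed)
    joint_a = pair_controller(ck, rng_a)
    joint_b = pair_controller(ck, rng_b)
    if joint_a is not None:  # joint_a == joint_b by construction
        return joint_a
    return independent_policy(obs_a, rng_a), independent_policy(obs_b, rng_b)

print(decentralised_act({"landmark": 3, "pos": 1}, {"landmark": 3, "pos": 2}))
```

With a shared landmark observation the two controllers agree on a joint action; with no common knowledge the hierarchy falls back to independent per-agent policies.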

 
