Multi-Agent Interactions Modeling with Correlated Policies
arXiv - CS - Multiagent Systems Pub Date : 2020-01-04 , DOI: arxiv-2001.03415
Minghuan Liu, Ming Zhou, Weinan Zhang, Yuzheng Zhuang, Jun Wang, Wulong Liu, Yong Yu

In multi-agent systems, complex interacting behaviors arise from the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is largely constrained by the assumption that policies and their reward structures are independent. In this paper, we cast the multi-agent interaction modeling problem as multi-agent imitation learning with explicit modeling of correlated policies, approximating opponents' policies in order to recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL regenerates complex interactions closer to the demonstrators' and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \url{https://github.com/apexrl/CoDAIL}.
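The abstract's central claim is that assuming independent policies discards the correlation structure present in the demonstrations. A minimal counting sketch illustrates why (this is a hypothetical toy, not the authors' CoDAIL algorithm, which uses adversarial imitation learning; the demonstrator and the counting estimators here are invented for illustration): fitting each agent's marginal policy alone makes the demonstrators look nearly uniform, while conditioning on the opponent's action recovers the correlation.

```python
import random
from collections import defaultdict

random.seed(0)
N_ACTIONS = 3

def demo_joint_action():
    """Hypothetical demonstrator pair with strongly correlated policies:
    with probability 0.9 the second agent copies the first."""
    a0 = random.randrange(N_ACTIONS)
    a1 = a0 if random.random() < 0.9 else random.randrange(N_ACTIONS)
    return a0, a1

# Independence assumption: model agent 1's policy as a marginal over its own actions.
marginal1 = [0] * N_ACTIONS
# Correlated modeling: estimate agent 1's policy conditioned on agent 0's action,
# an empirical P(a1 | a0), standing in for "approximating opponents' policies".
cond = defaultdict(lambda: [0] * N_ACTIONS)

for _ in range(5000):
    a0, a1 = demo_joint_action()
    marginal1[a1] += 1
    cond[a0][a1] += 1

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

p_marginal = normalize(marginal1)                   # near uniform: correlation lost
p_cond = {a0: normalize(cond[a0]) for a0 in cond}   # P(a1 = a0 | a0) is high
```

Under the independence assumption, `p_marginal` is close to uniform (about 1/3 per action), so sampling from it regenerates uncorrelated behavior; the conditional model `p_cond` assigns roughly 0.93 probability to the copied action, so it can regenerate interactions that resemble the demonstrations.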

Updated: 2020-06-12