Learning Emergent Discrete Message Communication for Cooperative Reinforcement Learning
arXiv - CS - Multiagent Systems. Pub Date: 2021-02-24, DOI: arxiv-2102.12550
Sheng Li, Yutai Zhou, Ross Allen, Mykel J. Kochenderfer

Communication is an important factor that enables agents to work cooperatively in multi-agent reinforcement learning (MARL). Most previous work uses continuous message communication, whose high representational capacity comes at the expense of interpretability. Allowing agents to learn their own discrete message communication protocol, emerging from a variety of domains, can increase interpretability for human designers and other agents. This paper proposes a method to generate discrete messages analogous to human languages, and to achieve communication through a broadcast-and-listen mechanism based on self-attention. We show that discrete message communication achieves performance comparable to continuous message communication, but with a much smaller vocabulary size. Furthermore, we propose an approach that allows humans to interactively send discrete messages to agents.
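As a rough illustration of how such a broadcast-and-listen scheme could be realized, the sketch below generates discrete message tokens and lets every agent attend over all broadcast messages with multi-head self-attention. The straight-through Gumbel-softmax discretization, vocabulary size, and layer dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteMessageComm(nn.Module):
    """Sketch of discrete message generation plus broadcast-and-listen
    aggregation via self-attention. Hyperparameters and the Gumbel-softmax
    discretization are assumptions for illustration only."""

    def __init__(self, obs_dim: int, vocab_size: int = 16, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.msg_head = nn.Linear(hidden_dim, vocab_size)      # logits over a small discrete vocabulary
        self.msg_embed = nn.Embedding(vocab_size, hidden_dim)  # listeners embed received tokens
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)

    def forward(self, obs: torch.Tensor):
        # obs: (n_agents, obs_dim) -- one observation per agent
        h = torch.relu(self.encoder(obs))                       # (n_agents, hidden_dim)
        logits = self.msg_head(h)                               # (n_agents, vocab_size)
        # Differentiable discretization: straight-through Gumbel-softmax
        one_hot = F.gumbel_softmax(logits, tau=1.0, hard=True)  # one-hot messages
        tokens = one_hot.argmax(dim=-1)                         # discrete, human-readable token IDs
        msg = one_hot @ self.msg_embed.weight                   # embedded messages, gradients preserved
        # Broadcast-and-listen: each agent attends over all agents' messages
        q = h.unsqueeze(0)                                      # (1, n_agents, hidden_dim)
        kv = msg.unsqueeze(0)
        listened, attn_weights = self.attn(q, kv, kv)
        return tokens, listened.squeeze(0), attn_weights.squeeze(0)

if __name__ == "__main__":
    comm = DiscreteMessageComm(obs_dim=8)
    obs = torch.randn(3, 8)                                     # 3 agents
    tokens, listened, w = comm(obs)
    print("messages:", tokens.tolist())                         # discrete token per agent
    print("attention over senders:\n", w)
```

The discrete token IDs are what a human designer (or another agent) would inspect or send interactively; the attention weights show which senders each listener attends to.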

Updated: 2021-02-26