When Does Communication Learning Need Hierarchical Multi-Agent Deep Reinforcement Learning
Cybernetics and Systems (IF 1.1), Pub Date: 2019-11-07, DOI: 10.1080/01969722.2019.1677335
Marie Ossenkopf, Mackenzie Jorgensen, Kurt Geihs
Abstract: Multi-agent systems need to communicate to coordinate on a shared task. We show that a recurrent neural network (RNN) can learn a communication protocol for coordination, even when the actions to be coordinated are performed several steps after the communication phase. We show that separating tasks that operate on different temporal scales is necessary for successful learning. We contribute a hierarchical deep reinforcement learning model for multi-agent systems that separates the communication and coordination task from action picking through a hierarchical policy. We further show that a separation of concerns in communication is beneficial but not necessary. As a testbed, we propose the Dungeon Lever Game, and we extend the Differentiable Inter-Agent Learning (DIAL) framework. We present and compare results from different model variations on the Dungeon Lever Game.
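To make the architectural idea concrete, the following is a minimal sketch, not the paper's actual model: a two-level agent in which a slow, recurrent communication level updates its hidden state and emits a message only once every `period` environment steps, while a fast action level picks a primitive action at every step, conditioned on the most recent message. All names (`HierarchicalAgent`, `period`, the weight matrices) and the linear/tanh parameterization are illustrative assumptions; the paper builds on DIAL with learned, differentiable messages.

```python
import numpy as np

rng = np.random.default_rng(0)


class HierarchicalAgent:
    """Illustrative two-timescale policy (hypothetical, not the paper's model):
    a recurrent communication level on a slow timescale and an action level
    on the fast timescale, conditioned on the latest communication output."""

    def __init__(self, obs_dim, msg_dim, n_actions, period=4):
        self.period = period
        self.h = np.zeros(msg_dim)  # recurrent state of the communication level
        self.goal = np.zeros(msg_dim)  # last emitted message / coordination goal
        # Small random weights stand in for learned parameters.
        self.W_comm = rng.normal(size=(msg_dim, obs_dim + msg_dim)) * 0.1
        self.W_act = rng.normal(size=(n_actions, obs_dim + msg_dim)) * 0.1

    def step(self, t, obs, incoming_msg):
        # Communication level: RNN-style update, but only on the slow timescale.
        if t % self.period == 0:
            x = np.concatenate([obs, incoming_msg])
            self.h = np.tanh(self.W_comm @ x)
            self.goal = self.h  # broadcast to the other agents as a message
        # Action level: runs every step, conditioned on the frozen goal.
        logits = self.W_act @ np.concatenate([obs, self.goal])
        return int(np.argmax(logits)), self.goal


# Usage: between communication steps the goal stays frozen, so action
# selection can depend on a message sent several steps earlier.
agent = HierarchicalAgent(obs_dim=3, msg_dim=2, n_actions=4, period=4)
_, g0 = agent.step(0, np.ones(3), np.zeros(2))
_, g1 = agent.step(1, 2 * np.ones(3), g0)  # no comm update at t=1
```

The separation of timescales is the point of the sketch: the communication state `h` only changes on multiples of `period`, so the protocol the slow level learns must remain useful for the actions taken in the intervening steps.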
