Multiagent Reinforcement Learning With Heterogeneous Graph Attention Network.
IEEE Transactions on Neural Networks and Learning Systems ( IF 10.4 ) Pub Date : 2023-10-05 , DOI: 10.1109/tnnls.2022.3215774
Wei Du 1 , Shifei Ding 1 , Chenglong Zhang 1 , Zhongzhi Shi 2

Most recent research on multiagent reinforcement learning (MARL) has explored how to deploy cooperative policies for homogeneous agents. However, realistic multiagent environments may contain heterogeneous agents that have different attributes or tasks. The heterogeneity of the agents and the diversity of their relationships make policy learning exceedingly difficult. To tackle this difficulty, we present a novel method that employs a heterogeneous graph attention network to model the relationships between heterogeneous agents. The proposed method generates an integrated feature representation for each agent by hierarchically aggregating the latent features of neighboring agents, fully accounting for importance at both the agent level and the relationship level. The method is agnostic to the specific MARL algorithm and can be flexibly integrated with diverse value decomposition methods. We conduct experiments in predator-prey and StarCraft Multiagent Challenge (SMAC) environments, and the empirical results demonstrate that our method outperforms existing methods in several heterogeneous scenarios.
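The hierarchical aggregation described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the two-level scheme below (agent-level attention over neighbors within each relation type, then relation-level attention fusing the per-relation embeddings) follows the general heterogeneous graph attention pattern; all dimensions, parameter names (`W`, `a`, `q`, `Wr`, `b`), and the "ally"/"enemy" relation types are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def agent_level(h_i, neighbors, W, a):
    """Attend over neighbor agents of one relation type.

    Each neighbor is projected by W, scored against agent i's own
    projection, and the softmax-weighted sum gives one per-relation
    embedding for agent i.
    """
    p_i = W @ h_i
    proj = [W @ h_j for h_j in neighbors]
    scores = np.array([a @ np.concatenate([p_i, p_j]) for p_j in proj])
    alpha = softmax(scores)  # agent-level importance weights
    return np.tanh(sum(al * p for al, p in zip(alpha, proj)))

def relation_level(z_per_rel, q, Wr, b):
    """Fuse per-relation embeddings with learned relation importance."""
    scores = np.array([q @ np.tanh(Wr @ z + b) for z in z_per_rel])
    beta = softmax(scores)  # relation-level importance weights
    return sum(be * z for be, z in zip(beta, z_per_rel))

# Toy setup: agent i observes neighbors under two relation types
# (hypothetical "ally" / "enemy" split, e.g. in a predator-prey task).
d_in, d_h = 4, 8
h_i = rng.normal(size=d_in)
relations = {
    "ally":  [rng.normal(size=d_in) for _ in range(3)],
    "enemy": [rng.normal(size=d_in) for _ in range(2)],
}
W = rng.normal(size=(d_h, d_in))   # shared projection
a = rng.normal(size=2 * d_h)       # agent-level attention vector
q = rng.normal(size=d_h)           # relation-level attention vector
Wr = rng.normal(size=(d_h, d_h))
b = rng.normal(size=d_h)

z_per_rel = [agent_level(h_i, neigh, W, a) for neigh in relations.values()]
z_i = relation_level(z_per_rel, q, Wr, b)  # integrated representation of agent i
```

The integrated representation `z_i` would then feed agent i's policy or utility network; because the fusion is a weighted sum over relation types, the scheme is indifferent to which value decomposition method (e.g., VDN- or QMIX-style mixing) consumes it downstream.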

Updated: 2022-11-04