Coordinated Control of Distributed Traffic Signal Based on Multiagent Cooperative Game
Wireless Communications and Mobile Computing (IF 2.146) Pub Date: 2021-06-02, DOI: 10.1155/2021/6693636
Zhenghua Zhang, Jin Qian, Chongxin Fang, Guoshu Liu, Quan Su
In adaptive traffic signal control (ATSC), reinforcement learning (RL) is a frontier research hotspot, often combined with deep neural networks to further enhance its learning ability. Centralized RL scales poorly as the road network grows; distributed multiagent RL (MARL) avoids this problem by letting each local agent observe only its own region of the complex traffic area. However, because the communication capability between agents is limited, the environment becomes only partially observable to each agent. This paper proposes multiagent reinforcement learning based on a cooperative game (CG-MARL), which models each intersection as an agent. The method considers not only the communication and coordination between agents but also the game played among them. Each agent observes its own area to learn an RL policy and value function; the value functions from the different agents are then aggregated through a mixing network to form a joint value function over the entire large-scale transportation network. The results show that the proposed method outperforms traditional control methods.
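The abstract does not give the exact form of CG-MARL's mixing network, but the idea of aggregating per-intersection value functions into a joint value can be illustrated with a QMIX-style monotonic mixer. The sketch below is a minimal, hypothetical illustration, not the paper's implementation: the agent count, phase count, Q-values, and mixing weights are all made-up stand-ins, and the nonnegative-weight constraint is the standard trick that keeps each agent's greedy local choice consistent with the joint value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 intersections (agents), each choosing among 2 signal phases.
N_AGENTS, N_PHASES = 4, 2

# Stand-in for each agent's locally learned value function: one Q-value per phase,
# estimated from the agent's own observed region only.
local_q = rng.normal(size=(N_AGENTS, N_PHASES))

def mix(agent_qs, w, b):
    """Monotonic mixing of per-agent Q-values into a joint value.

    Taking the absolute value of the weights guarantees that improving any
    single agent's local Q can never decrease the joint value, so greedy
    decentralized action selection stays consistent with the joint optimum.
    """
    return float(np.abs(w) @ agent_qs + b)

# Hypothetical mixing parameters (in a full method these would typically be
# produced by a network conditioned on the global traffic state).
w = rng.normal(size=N_AGENTS)
b = 0.1

# Decentralized execution: each intersection greedily picks its best phase
# from its own local Q-values, with no communication needed at run time.
actions = local_q.argmax(axis=1)
chosen_qs = local_q[np.arange(N_AGENTS), actions]

# Centralized aggregation: the mixer scores the joint action for training.
joint_value = mix(chosen_qs, w, b)
```

The monotonicity property is easy to check: raising any one agent's local Q-value can only raise (or leave unchanged) the mixed joint value, which is what lets each intersection act on its own value function while training optimizes the networkwide objective.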

Updated: 2021-06-02