A deep reinforcement learning-based distributed connected automated vehicle control under communication failure
Computer-Aided Civil and Infrastructure Engineering ( IF 9.6 ) Pub Date : 2022-02-24 , DOI: 10.1111/mice.12825
Haotian Shi 1, 2 , Yang Zhou 1 , Xin Wang 3 , Sicheng Fu 1 , Siyuan Gong 4 , Bin Ran 1
This paper proposes a deep reinforcement learning (DRL)-based distributed longitudinal control strategy for connected and automated vehicles (CAVs) under communication failure to stabilize traffic oscillations. Specifically, signal-to-interference-plus-noise ratio (SINR)-based vehicle-to-vehicle communication is incorporated into the DRL training environment to reproduce realistic communication and time–space-varying information flow topologies (IFTs). A dynamic information fusion mechanism is designed to smooth the high-jerk control signal caused by the dynamic IFTs. On this basis, each CAV is controlled by a DRL-based agent that receives real-time state information from downstream CAVs and takes longitudinal actions to reach equilibrium consensus in the multi-agent system. Simulated experiments are conducted to tune the communication adjustment mechanism and to validate the control performance, oscillation-dampening capability, and generalization of the proposed algorithm.
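The abstract does not give the paper's exact SINR model or fusion rule, but the two core ideas can be sketched minimally: a V2V link is usable only when its SINR clears a decoding threshold, and when the information-flow topology switches, the control signal is smoothed to limit jerk. All function names, the threshold value, and the smoothing constant below are illustrative assumptions, not the authors' implementation:

```python
import math

def sinr_db(p_signal_w: float, p_interference_w: float, p_noise_w: float) -> float:
    """Signal-to-interference-plus-noise ratio in dB (powers in watts)."""
    return 10.0 * math.log10(p_signal_w / (p_interference_w + p_noise_w))

def link_active(p_signal_w: float, p_interference_w: float,
                p_noise_w: float, threshold_db: float = 5.0) -> bool:
    """A V2V link is treated as available only if its SINR exceeds a
    decoding threshold; below it, the downstream CAV's state is lost."""
    return sinr_db(p_signal_w, p_interference_w, p_noise_w) >= threshold_db

def fuse_control(prev_u: float, new_u: float, alpha: float = 0.3) -> float:
    """Exponentially smooth the commanded acceleration so that an abrupt
    topology change (a link appearing or dropping) does not cause a
    high-jerk jump in the control signal."""
    return (1.0 - alpha) * prev_u + alpha * new_u
```

For example, a strong direct signal against weak interference (`link_active(1.0, 0.01, 0.001)`) passes the threshold, while an interference-dominated link (`link_active(0.01, 1.0, 0.1)`) fails, forcing the agent to act on the smoothed, previously fused information instead.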
