Dynamic mode selection and resource allocation approach for 5G-vehicle-to-everything (V2X) communication using asynchronous federated deep reinforcement learning method
Vehicular Communications (IF 5.8), Pub Date: 2022-10-01, DOI: 10.1016/j.vehcom.2022.100532
Iftikhar Rasheed

5G vehicle-to-everything (V2X) connectivity is crucial to enabling the complex vehicular networking environments required by future intelligent transportation systems (ITS). However, for mission-critical applications such as safety services, unreliable vehicle-to-vehicle (V2V) links and the heavy signaling overhead of centralized resource-allocation schemes are becoming key obstacles. This work addresses the joint optimization of transmission-mode selection and resource-block allocation in a 5G-V2X communication scenario. The problem is formulated as a Markov decision process, and a decentralized deep reinforcement learning (DRL) algorithm is presented to maximize the sum channel capacity of vehicle-to-infrastructure users while satisfying the latency and reliability constraints of the V2V communication links. In addition, to overcome the training limitations of purely local DRL models, a two-timescale asynchronous federated DRL algorithm is adopted to make the system robust: on the large timescale, graph-based vehicle clustering groups neighboring vehicles into clusters, while on the small timescale the vehicles within each cluster jointly train a robust global model via asynchronous federated DRL. The effects of the outage threshold and vehicle density on network performance are evaluated. Simulation results show that the proposed scheme outperforms previous state-of-the-art approaches, and its overall superiority and convergence are verified.
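The abstract does not include implementation details, so the following is only a minimal sketch of the two-timescale idea it describes: each vehicle trains a local agent for joint mode selection (V2V vs. V2I) and resource-block choice on the small timescale, and a cluster head asynchronously averages the local models, down-weighting stale updates. All names and parameters (LocalAgent, federated_average, NUM_RBS, the toy reward, etc.) are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

NUM_RBS = 4          # resource blocks a vehicle can choose from (assumed)
NUM_MODES = 2        # 0 = V2V (sidelink), 1 = V2I (cellular) (assumed)
STATE_DIM = 6        # e.g. CSI, queue length, remaining latency budget (assumed)
NUM_ACTIONS = NUM_MODES * NUM_RBS

class LocalAgent:
    """Tiny linear Q-learning agent standing in for each vehicle's local DRL model."""
    def __init__(self, lr=0.01, gamma=0.95, eps=0.1):
        self.w = np.zeros((STATE_DIM, NUM_ACTIONS))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        # epsilon-greedy over joint (mode, resource-block) actions
        if np.random.rand() < self.eps:
            return np.random.randint(NUM_ACTIONS)
        return int(np.argmax(state @ self.w))

    def update(self, s, a, r, s_next):
        # one-step TD(0) update of the linear Q-function
        target = r + self.gamma * np.max(s_next @ self.w)
        td_err = target - (s @ self.w)[a]
        self.w[:, a] += self.lr * td_err * s

def federated_average(local_ws, staleness):
    """Asynchronous FedAvg: stale local updates receive smaller mixing weights."""
    mix = np.array([1.0 / (1 + s) for s in staleness])
    mix /= mix.sum()
    return sum(m * w for m, w in zip(mix, local_ws))

# --- small-timescale loop inside one cluster (toy stand-in environment) ---
cluster = [LocalAgent() for _ in range(5)]        # vehicles grouped on the large timescale
global_w = np.zeros((STATE_DIM, NUM_ACTIONS))

for rnd in range(20):
    local_ws, staleness = [], []
    for agent in cluster:
        agent.w = global_w.copy()                 # pull the latest global model
        for _ in range(10):                       # local training steps
            s = np.random.randn(STATE_DIM)
            a = agent.act(s)
            r = np.random.rand()                  # placeholder reward: V2I capacity
            s_next = np.random.randn(STATE_DIM)   # minus V2V latency/outage penalty
            agent.update(s, a, r, s_next)
        local_ws.append(agent.w)
        staleness.append(np.random.randint(0, 3)) # asynchronous arrival delay
    global_w = federated_average(local_ws, staleness)
```

In a full system the placeholder reward would be replaced by the sum V2I capacity minus penalties for violated V2V latency/reliability constraints, and the clustering step would be rerun on the large timescale as the vehicle topology changes.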




Updated: 2022-10-01