Resource Allocation for Delay-Sensitive Vehicle-to-Multi-Edges (V2Es) Communications in Vehicular Networks: A Multi-Agent Deep Reinforcement Learning Approach
IEEE Transactions on Network Science and Engineering (IF 6.7), Pub Date: 2021-04-26, DOI: 10.1109/tnse.2021.3075530
Jing Wu, Juzhen Wang, Qimei Chen, Zenghui Yuan, Pan Zhou, Xiumin Wang, Cai Fu

The rapid development of the Internet of Vehicles (IoV) has recently led to the emergence of diverse intelligent vehicular applications, such as automated driving, auto navigation, and advanced driver assistance. However, current vehicular communication frameworks, such as vehicle-to-vehicle (V2V), vehicle-to-cloud (V2C), and vehicle-to-roadside infrastructure (V2I), still struggle to support these intelligent, delay-sensitive applications because of long communication latency or limited computational capability. In addition, traditional vehicular networks are prone to becoming unavailable due to the high-speed mobility of vehicles. To address these issues, this paper proposes a vehicle-to-multi-edges (V2Es) communication framework for vehicular networks. By utilizing the resources of nearby edge nodes, urgent vehicle information and services can be processed and completed in a timely manner, which improves the quality of service for vehicles. Furthermore, we define a joint task offloading and edge caching problem that targets both service latency and vehicle energy consumption. Based on this, we propose a multi-agent reinforcement learning (RL) method that learns the dynamic communication status between vehicles and edge nodes and makes task offloading and edge caching decisions. Finally, simulation results show that our proposal learns the scheduling policy more quickly and effectively and reduces service latency by more than 10% on average.
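
The abstract frames offloading and caching as a joint optimization of service latency and vehicle energy consumption, learned by multiple RL agents. The sketch below only illustrates that idea: it uses independent, stateless tabular Q-learning agents as a stand-in for the paper's multi-agent deep RL, and the toy environment, the (edge, cache) action encoding, and the latency/energy weights are all assumptions for demonstration, not details taken from the paper.

```python
# Illustrative sketch only: independent tabular Q-learning agents stand in for the
# paper's multi-agent deep RL. The toy environment, action encoding, and reward
# weights below are assumptions for demonstration, not details from the paper.
import random

N_VEHICLES = 3   # one learning agent per vehicle (assumed)
N_EDGES = 2      # candidate edge nodes in proximity (assumed)
ACTIONS = [(edge, cache) for edge in range(N_EDGES) for cache in (0, 1)]
ALPHA, EPSILON = 0.1, 0.1
W_LATENCY, W_ENERGY = 0.7, 0.3   # assumed weights for the joint latency/energy cost

def simulate_offload(edge, cache):
    """Toy model of one offloading round: returns (latency, energy).
    Caching at the edge cuts latency on repeated requests but costs some energy."""
    latency = random.uniform(10.0, 30.0) / (edge + 1)  # edges differ in capacity (assumed)
    if cache:
        latency *= 0.5
    energy = random.uniform(1.0, 3.0) + (0.5 if cache else 0.0)
    return latency, energy

# One Q-table per vehicle agent over the (offload target, cache decision) actions.
q_tables = [{a: 0.0 for a in ACTIONS} for _ in range(N_VEHICLES)]

for episode in range(2000):
    for q in q_tables:
        # Epsilon-greedy selection of an (edge, cache) action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        latency, energy = simulate_offload(*action)
        # Reward is the negative weighted cost, mirroring the joint objective.
        reward = -(W_LATENCY * latency + W_ENERGY * energy)
        # Stateless, bandit-style update; the paper's agents condition on richer
        # vehicle/edge communication state, which is omitted here for brevity.
        q[action] += ALPHA * (reward - q[action])

for vehicle, q in enumerate(q_tables):
    edge, cache = max(q, key=q.get)
    print(f"vehicle {vehicle}: offload to edge {edge}, cache at edge: {bool(cache)}")
```

Under these toy dynamics the agents converge to the higher-capacity edge with caching enabled, since both choices lower the weighted latency/energy cost; the paper's actual environment model and agent architecture are not reproduced here.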

Last updated: 2021-04-26