A Computing Offloading Resource Allocation Scheme Using Deep Reinforcement Learning in Mobile Edge Computing Systems
Journal of Grid Computing (IF 5.5) Pub Date: 2021-07-22, DOI: 10.1007/s10723-021-09568-w
Xuezhu Li

To address the increased latency, higher energy consumption, and degraded quality of service in current vehicular networks, this paper proposes a computing offloading resource allocation strategy based on deep reinforcement learning for the Internet of Vehicles. First, the system architecture for the Internet of Vehicles is designed, and the computation and communication models of the offloading strategy are constructed. Then, the resource allocation problem in the offloading process is studied for a real-time energy-aware offloading scheme in mobile edge computing. In addition, taking the battery capacity of vehicle users into account, the remaining energy rate is used to redefine the weighting factors so that energy consumption can be sensed in real time. Finally, with the shortest delay and the smallest computational cost as optimization goals, Q-learning is used to optimize the offloading strategy, that is, to obtain the optimal allocation of communication and computing resources and the best system security. Simulation results show that the delay of the proposed algorithm is 0.442 s at a computational complexity of 9000 cycles/byte, an improvement in delay over the three comparison algorithms.
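The abstract only names the two key ingredients, an energy-aware weighting factor derived from the remaining energy rate and Q-learning over offloading decisions, without giving formulas. The sketch below is a minimal, hypothetical illustration of how those pieces could fit together, not the paper's actual formulation: the two-action space (local vs. MEC offload), the cost form C = (1 - w_e)*T + w_e*E with w_e = 1 - E_remaining/E_capacity, the quantized battery-level state, and all hyperparameters and delay/energy numbers are assumptions for illustration.

import random
from collections import defaultdict

# --- Energy-aware weighting (hypothetical form) ----------------------------
# The abstract says the remaining energy rate redefines the weighting
# factors; the exact formula is not given, so here we simply let the
# energy term dominate the cost as the battery drains.
def energy_weight(remaining_energy, battery_capacity):
    remaining_rate = remaining_energy / battery_capacity
    return 1.0 - remaining_rate  # low battery -> energy weighted more

def offload_cost(delay, energy, remaining_energy, battery_capacity):
    w_e = energy_weight(remaining_energy, battery_capacity)
    return (1.0 - w_e) * delay + w_e * energy  # weighted delay/energy cost

# --- Tabular Q-learning over offloading decisions ---------------------------
ACTIONS = (0, 1)  # 0 = execute locally, 1 = offload to the MEC server

class OffloadingAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:  # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

# --- Toy training loop (all numbers invented for illustration) --------------
def step(action, remaining, capacity):
    # Local execution: slow but little energy; offloading: fast but
    # pays transmission energy. Reward is the negated weighted cost.
    delay, energy = (0.8, 0.05) if action == 0 else (0.3, 0.12)
    reward = -offload_cost(delay, energy, remaining, capacity)
    return reward, max(remaining - energy, 0.0)

agent = OffloadingAgent()
remaining, capacity = 1.0, 1.0
for _ in range(1000):
    state = round(remaining, 1)      # coarse state: quantized battery level
    action = agent.act(state)
    reward, remaining = step(action, remaining, capacity)
    agent.update(state, action, reward, round(remaining, 1))
    if remaining <= 0.0:
        remaining = capacity         # "recharge" and keep training

Because the weight w_e grows as the battery drains, the same offloading action becomes more expensive in the learned cost when energy is scarce, so the agent shifts toward the cheaper-energy action at low battery; this is one plausible reading of "real-time energy sensing" via the remaining energy rate.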



