Deep Reinforcement Learning-based Adaptive Computation Offloading for MEC in Heterogeneous Vehicular Networks
IEEE Transactions on Vehicular Technology ( IF 6.8 ) Pub Date : 2020-07-01 , DOI: 10.1109/tvt.2020.2993849
Hongchang Ke , Jian Wang , Lingyue Deng , Yuming Ge , Hui Wang

Vehicular networks need efficient and reliable data communication technology to maintain low latency. Minimizing energy consumption and data communication delay is challenging while vehicles are moving and wireless channels and bandwidth are time-varying. With the help of emerging mobile edge computing (MEC) servers, vehicles and roadside units (RSUs) can offload computing tasks to the MEC server associated with a base station (BS). However, the environment for offloading tasks to MEC, e.g., wireless channel states and available bandwidth, is unstable, so ensuring efficient computation offloading under such conditions is difficult. In this work, we design a task computation offloading model for a heterogeneous vehicular network; the model accounts for multiple stochastic tasks as well as time-varying wireless channels and bandwidth. To balance the cost of energy consumption against the cost of data transmission delay, and to avoid the curse of dimensionality caused by a large action space, we propose an adaptive computation offloading method based on deep reinforcement learning (ACORL) that handles continuous action spaces. ACORL adds an Ornstein-Uhlenbeck (OU) noise vector to the action space, with a different factor for each action, to facilitate exploration. Multiple transmitting devices can process tasks locally or offload them to the MEC server. Moreover, ACORL accounts for variation in wireless channels and available bandwidth between adjacent time slots. Numerical results illustrate that the proposed ACORL effectively learns the optimal policy, outperforming Dueling DQN and a greedy policy in the stochastic environment.
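The abstract's exploration mechanism, adding Ornstein-Uhlenbeck noise to each continuous action, can be sketched as follows. This is a minimal illustrative implementation of the standard OU process used in continuous-action deep RL; the parameter values (`theta`, `sigma`, `dt`) and the offloading-ratio usage are assumptions for demonstration, not the authors' actual settings.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise.

    Each call to sample() advances the process by one time step:
        dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, I)
    Per-dimension scaling (one factor per action, as in ACORL) can be
    applied by the caller; parameter values here are illustrative.
    """

    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2,
                 dt=1e-2, seed=0):
        self.mu = mu * np.ones(action_dim)
        self.theta = theta
        self.sigma = sigma
        self.dt = dt
        self.rng = np.random.default_rng(seed)
        self.state = self.mu.copy()

    def reset(self):
        """Reset the process to its long-run mean (e.g., at episode start)."""
        self.state = self.mu.copy()

    def sample(self):
        """Advance the process one step and return the noise vector."""
        dx = (self.theta * (self.mu - self.state) * self.dt
              + self.sigma * np.sqrt(self.dt)
              * self.rng.standard_normal(len(self.state)))
        self.state = self.state + dx
        return self.state

# Hypothetical usage: perturb a 3-dimensional continuous offloading action
# (e.g., offloading ratios), then clip back to the valid range [0, 1].
noise = OUNoise(action_dim=3)
raw_action = np.full(3, 0.5)            # stand-in for an actor-network output
action = np.clip(raw_action + noise.sample(), 0.0, 1.0)
```

Because consecutive OU samples are correlated, the perturbed actions drift smoothly rather than jumping randomly at each step, which is why this process is a common exploration choice for continuous control.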

Updated: 2020-07-01