Computation offloading over multi-UAV MEC network: A distributed deep reinforcement learning approach
Computer Networks ( IF 5.6 ) Pub Date : 2021-09-02 , DOI: 10.1016/j.comnet.2021.108439
Dawei Wei 1, 2 , Jianfeng Ma 3, 4 , Linbo Luo 3 , Yunbo Wang 3 , Lei He 1 , Xinghua Li 3, 4
Unmanned aerial vehicle (UAV)-assisted computation offloading allows mobile devices (MDs) to process computation-intensive and latency-sensitive tasks where infrastructure is limited or unavailable. To achieve long-term performance in a changing environment, deep reinforcement learning-based methods have been applied to the UAV-assisted computation offloading problem. However, deploying multiple UAVs for computation offloading in a mobile edge computing (MEC) network still lacks a flexible learning scheme that can efficiently adjust the offloading policy under dynamic UAV mobility patterns and UAV failures. To this end, this paper proposes a distributed deep reinforcement learning (DRL)-based method with cooperative exploration and prioritized experience replay (PER). The distributed exploration process provides a flexible learning scheme under UAV failure by allowing MDs to learn cost-efficient offloading policies cooperatively. Furthermore, PER lets MDs preferentially replay transitions with high TD-error, which improves performance under dynamic UAV mobility patterns. The efficiency of the proposed method is demonstrated by comparison with existing computation offloading methods; results show that it outperforms the compared methods in convergence rate, energy-task efficiency, and average processing time.
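The abstract does not specify the authors' PER implementation; the core idea of sampling transitions in proportion to their TD-error can be sketched as follows (class name, parameters, and the simple list-based storage are illustrative, not taken from the paper):

```python
import random


class PrioritizedReplayBuffer:
    """Minimal sketch of TD-error-based prioritized experience replay."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha    # how strongly priorities bias sampling (0 = uniform)
        self.eps = eps        # keeps every priority strictly positive
        self.buffer = []      # stored transitions, e.g. (state, action, reward, next_state)
        self.priorities = []  # one priority per stored transition
        self.pos = 0          # next slot to overwrite once full

    def add(self, transition, td_error):
        # Priority grows with the magnitude of the TD-error.
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            # Ring-buffer overwrite of the oldest entry.
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # High-TD-error transitions are drawn more often.
        return random.choices(self.buffer, weights=self.priorities, k=batch_size)
```

A full implementation would also update priorities after each training step and apply importance-sampling weights to correct the bias introduced by non-uniform sampling; both are omitted here for brevity.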



Updated: 2021-09-09