Distributed and Collective Deep Reinforcement Learning for Computation Offloading: A Practical Perspective
IEEE Transactions on Parallel and Distributed Systems ( IF 5.6 ) Pub Date : 2021-05-01 , DOI: 10.1109/tpds.2020.3042599
Xiaoyu Qiu , Weikun Zhang , Wuhui Chen , Zibin Zheng

Mobile edge computing (MEC) is a promising solution for supporting resource-constrained devices by offloading tasks to edge servers. However, traditional approaches to computation offloading (e.g., linear programming and game-theoretic methods) mainly focus on immediate performance, potentially leading to performance degradation in the long run. Recent breakthroughs in deep reinforcement learning (DRL) provide an alternative: methods that maximize the cumulative reward. Nonetheless, a large gap remains before real DRL applications can be deployed in MEC, because: 1) training a well-performing DRL agent typically requires large quantities of diverse data, and 2) DRL training usually incurs huge trial-and-error costs. To address this mismatch, we study the application of DRL to the multi-user computation offloading problem from a more practical perspective. In particular, we propose a distributed and collective DRL algorithm called DC-DRL with several improvements: 1) a distributed and collective training scheme that assimilates knowledge from multiple MEC environments, which not only greatly increases data quantity and diversity but also spreads the exploration costs; 2) an updating method called adaptive n-step learning, which improves training efficiency without suffering from the high variance caused by distributed training; and 3) a combination of deep neuroevolution and policy gradient that maximizes the utilization of multiple environments and prevents premature convergence. Finally, evaluation results demonstrate the effectiveness of the proposed algorithm: compared with the baselines, exploration costs and final system costs are reduced by at least 43 and 9.4 percent, respectively.
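The "adaptive n-step learning" improvement builds on the standard n-step return from reinforcement learning, where the target blends n observed rewards with a bootstrapped value estimate; larger n propagates reward information faster but raises variance, which motivates adapting n. Below is a minimal sketch of the underlying n-step return computation only (function name and signature are illustrative, not the authors' code; the paper's adaptation rule for choosing n is not reproduced here):

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """Compute the n-step return G_t = sum_{k=0}^{n-1} gamma^k * r_{t+k} + gamma^n * V(s_{t+n}).

    rewards         -- list of the n observed rewards r_t, ..., r_{t+n-1}
    bootstrap_value -- value estimate V(s_{t+n}) at the state reached after n steps
    gamma           -- discount factor in [0, 1]
    """
    g = bootstrap_value
    # Fold rewards in backwards so each step applies one discount factor.
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For example, with `gamma=1.0`, three unit rewards, and a zero bootstrap value, the return is simply 3.0. An adaptive scheme in this spirit would shorten the horizon (smaller n) when the variance of these targets grows, trading bias for stability.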

Updated: 2021-05-01