Dynamic Computation Offloading With Energy Harvesting Devices: A Graph-Based Deep Reinforcement Learning Approach
IEEE Communications Letters ( IF 4.1 ) Pub Date : 2021-07-09 , DOI: 10.1109/lcomm.2021.3094842
Juan Chen , Zongling Wu

We study a joint partial offloading and resource allocation (JPORA) problem for mobile edge computing (MEC) with energy harvesting (EH), where the number of mobile devices (MDs), their computation tasks, and their harvested energy are all highly dynamic. It is therefore critical to design an algorithm that optimally adapts JPORA decisions. Recent studies employ a deep deterministic policy gradient (DDPG) agent to tackle the JPORA problem. However, traditional DDPG does not generalize well across MEC networks of different scales, because the deep neural networks in DDPG can only extract latent representations from Euclidean data, ignoring the structural information of the MEC network. To this end, by combining the graph-based relational reasoning ability of graph convolutional networks (GCNs) with the self-evolution ability that DDPG acquires through experience-driven training, we propose a centralized GCN-DDPG agent that learns to make decisions for MDs, including the offloading ratio, local computation capacity, and uplink transmission power. Experimental results show that the proposed GCN-DDPG provides significant performance improvements over a number of state-of-the-art DRL agents.
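The letter itself contains no code; the following is only a rough sketch of the pipeline the abstract describes — a GCN encoding the MEC topology into per-node embeddings, and an actor head mapping each MD's embedding to the three decision variables (offloading ratio, local CPU capacity, uplink power). The topology, node features, weight matrices, and the caps `F_MAX`/`P_MAX` are all invented for illustration; in the actual GCN-DDPG agent the weights would be trained by the DDPG actor-critic updates.

```python
import math

def matmul(A, B):
    """Plain-list matrix product."""
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def relu(M):
    return [[max(0.0, v) for v in row] for row in M]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gcn_layer(A_hat, X, W):
    """One graph convolution: H = ReLU(A_hat @ X @ W)."""
    return relu(matmul(matmul(A_hat, X), W))

# Toy MEC topology: node 0 is the edge server, nodes 1-3 are MDs (star graph).
edges = [(0, 1), (0, 2), (0, 3)]
n = 4
A = [[0.0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0
for i in range(n):  # add self-loops
    A[i][i] = 1.0
deg = [sum(row) for row in A]
# Symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
A_hat = [[A[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
         for i in range(n)]

# Node features (illustrative): [task size, battery level, CPU load]
X = [[0.0, 0.0, 0.8],   # edge server
     [2.0, 0.5, 0.3],   # MD 1
     [1.0, 0.9, 0.6],   # MD 2
     [3.0, 0.2, 0.1]]   # MD 3

# Fixed toy weights; in GCN-DDPG these are learned via actor-critic training.
W1 = [[0.2, -0.1], [0.4, 0.3], [-0.3, 0.5]]       # 3 features -> 2-dim embedding
W_head = [[0.6, -0.4, 0.2], [-0.5, 0.7, 0.3]]     # 2-dim embedding -> 3 raw actions

H = gcn_layer(A_hat, X, W1)

F_MAX, P_MAX = 1.5e9, 0.2  # assumed per-MD CPU cap (Hz) and uplink power cap (W)
actions = {}
for md in (1, 2, 3):
    raw = matmul([H[md]], W_head)[0]
    actions[md] = {
        "offload_ratio": sigmoid(raw[0]),      # fraction of task sent to the server
        "cpu_freq": sigmoid(raw[1]) * F_MAX,   # local computation capacity
        "tx_power": sigmoid(raw[2]) * P_MAX,   # uplink transmission power
    }

for md, a in actions.items():
    print(md, a)
```

Because each decision is squashed through a sigmoid and scaled by its cap, every action is automatically feasible (ratio in [0, 1], capacity and power within their limits), which is one common way continuous DDPG actions are bounded; the letter's exact action parameterization may differ.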

Updated: 2021-09-10