On Joint Offloading and Resource Allocation: A Double Deep Q-Network Approach
IEEE Transactions on Cognitive Communications and Networking ( IF 7.4 ) Pub Date : 2021-09-29 , DOI: 10.1109/tccn.2021.3116251
Fahime Khoramnejad , Melike Erol-Kantarci

Multi-access edge computing (MEC) is an important enabling technology for 5G and 6G networks. With MEC, mobile devices can offload their computationally heavy tasks to a nearby server, which can be a simple node at a base station, a vehicle, or another device. With the increasing number of devices, slices, and radio access technologies, task offloading is becoming an increasingly complex problem. Traditional approaches therefore face limitations, while machine learning algorithms emerge as promising alternatives. In this paper, we consider binary and partial offloading problems and aim to jointly find optimal offloading and resource allocation decisions that maximize the number of computed bits while minimizing energy consumption. This allows improved use of uplink transmit power and local CPU resources. We propose the Deep Reinforcement Learning for Joint Resource Allocation and Offloading (DJROM) algorithm, which uses the double deep Q-network approach and models UEs as agents. We compare the proposed approach with two other machine-learning-based techniques, namely multi-agent deep Q-learning (MARL-DQL) and multi-agent deep Q-network (MARL-DQN), under both fixed and mobile scenarios. Our results show that the DJROM scheme achieves higher efficiency than the compared algorithms.
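The abstract's key algorithmic ingredient is the double deep Q-network (DDQN) update: the online network *selects* the next action and a separate target network *evaluates* it, which reduces the overestimation bias of vanilla deep Q-learning. The following is a minimal sketch of that target computation only; tabular Q-values stand in for neural networks, and all names (states, actions as offload choices) are illustrative assumptions, not the paper's DJROM implementation.

```python
import numpy as np

def double_dqn_target(q_online, q_target, reward, next_state,
                      gamma=0.99, done=False):
    """Double-DQN bootstrap target for one transition.

    Action selection uses the online Q-values; action evaluation
    uses the target Q-values (the decoupling that distinguishes
    double DQN from vanilla DQN).
    """
    if done:
        return reward
    # Online network picks the greedy next action ...
    best_action = int(np.argmax(q_online[next_state]))
    # ... target network scores that action.
    return reward + gamma * q_target[next_state, best_action]

# Toy example: 2 states, 3 actions (e.g., candidate offload decisions).
q_online = np.array([[0.2, 0.9, 0.1],
                     [0.5, 0.3, 0.8]])
q_target = np.array([[0.1, 0.4, 0.6],
                     [0.7, 0.2, 0.3]])

y = double_dqn_target(q_online, q_target, reward=1.0, next_state=1, gamma=0.9)
# Online argmax at state 1 is action 2; target evaluates it: 1.0 + 0.9 * 0.3 = 1.27
```

In a multi-agent setting such as the one described here, each UE agent would maintain its own pair of networks and compute this target from its local observations.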

Updated: 2021-09-29