Heterogeneous Task Offloading and Resource Allocations via Deep Recurrent Reinforcement Learning in Partial Observable Multifog Networks
IEEE Internet of Things Journal (IF 8.2) Pub Date: 2020-07-15, DOI: 10.1109/jiot.2020.3009540
Jungyeon Baek, Georges Kaddoum

As wireless services and applications become more sophisticated and require faster, higher-capacity networks, the execution of increasingly complex tasks must be managed efficiently according to the requirements of each application. In this regard, fog computing enables the integration of virtualized servers into networks and brings cloud services closer to end devices. In contrast to cloud servers, fog nodes have limited computing capacity, so a single fog node may not be able to execute computing-intensive tasks. In this context, task offloading can be particularly useful at the fog nodes, provided that suitable nodes are selected and resources are properly managed while the users' Quality-of-Service (QoS) requirements are guaranteed. This article studies the design of joint task offloading and resource allocation control for heterogeneous service tasks in multifog node systems. The problem is formulated as a partially observable stochastic game in which the fog nodes cooperate to maximize the aggregated local rewards while each node has access only to its local observations. To deal with partial observability, we apply a deep recurrent Q-network (DRQN) approach to approximate the optimal value functions. The solution is then compared with deep Q-network (DQN) and deep convolutional Q-network (DCQN) approaches to evaluate the performance of the different neural networks. Moreover, to guarantee the convergence and accuracy of the neural network, an adjusted exploration-exploitation method is adopted. Numerical results show that the proposed algorithm achieves a higher average success rate and a lower average overflow than baseline methods.
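To make the DRQN idea concrete, the following is a minimal sketch in PyTorch (the paper does not specify a framework): an LSTM-based Q-network whose recurrent state aggregates a fog node's observation history under partial observability, paired with an annealed epsilon-greedy selector as an illustrative stand-in for the adjusted exploration-exploitation method. All names, dimensions, and the linear decay schedule here are assumptions for illustration, not the authors' implementation.

```python
import random
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """LSTM-based Q-network: maps a sequence of a fog node's local
    observations to Q-values over a discrete set of offloading and
    resource-allocation actions (dimensions are illustrative)."""
    def __init__(self, obs_dim, num_actions, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, seq_len, obs_dim). The recurrent state lets the
        # agent accumulate its observation history, which is how a DRQN
        # copes with only seeing local observations of the system.
        x = torch.relu(self.encoder(obs_seq))
        out, hidden = self.lstm(x, hidden)
        return self.q_head(out), hidden  # Q-values: (batch, seq_len, num_actions)

def select_action(q_last, step, eps_start=1.0, eps_end=0.05, decay=1e-4):
    """Annealed epsilon-greedy selection (an assumed stand-in for the
    paper's adjusted exploration-exploitation method): epsilon decays
    linearly from eps_start toward eps_end as training progresses."""
    eps = eps_end + (eps_start - eps_end) * max(0.0, 1.0 - decay * step)
    if random.random() < eps:
        return random.randrange(q_last.shape[-1])  # explore
    return int(q_last.argmax().item())             # exploit

# Example: one fog node with a 10-dimensional local observation,
# 4 candidate actions, and a history of 5 past observations.
net = DRQN(obs_dim=10, num_actions=4)
q_seq, h = net(torch.zeros(1, 5, 10))
action = select_action(q_seq[0, -1], step=1000)
```

The recurrent layer is the key design choice relative to a plain DQN: by conditioning Q-values on the hidden state rather than a single observation, the agent can implicitly infer unobserved parts of the multifog system's state from its history.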

Last updated: 2024-08-22