Sustainable Task Offloading in UAV Networks via Multi-Agent Reinforcement Learning
IEEE Transactions on Vehicular Technology (IF 6.1). Pub Date: 2021-04-21. DOI: 10.1109/tvt.2021.3074304
Alessio Sacco , Flavio Esposito , Guido Marchetto , Paolo Montuschi

The recent growth of IoT devices, along with edge computing, has revealed many opportunities for novel applications. Among them, Unmanned Aerial Vehicles (UAVs), deployed for surveillance and environmental monitoring, are attracting increasing attention. In this context, typical solutions must cope with events that may change the state of the network while providing a service that continuously maintains a high level of performance. In this paper, we address this problem by proposing a distributed architecture that leverages a Multi-Agent Reinforcement Learning (MARL) technique to dynamically offload tasks from UAVs to the edge cloud. Nodes of the system cooperate to jointly minimize the overall latency perceived by the user and the energy usage on UAVs by continuously learning the best action from the environment; this action entails the decision of whether to offload and, if so, the best transmission technology, i.e., Wi-Fi or cellular. Results validate our distributed architecture and show the effectiveness of the approach in reaching the above targets.
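To make the decision loop described in the abstract concrete, the sketch below shows a per-UAV agent that learns, via tabular Q-learning, whether to execute a task locally or offload it to the edge over Wi-Fi or cellular, with a reward that jointly penalizes latency and on-board energy. This is an illustrative assumption, not the paper's actual model: the state space, action names, cost table, and reward weighting are all hypothetical, and the paper's MARL scheme is richer than this single-agent toy.

```python
import random
from collections import defaultdict

# Hypothetical action set: run the task on the UAV, or offload it to the
# edge cloud over one of two transmission technologies.
ACTIONS = ("local", "offload_wifi", "offload_cellular")

def reward(latency_s, energy_j, w=0.5):
    # Joint objective from the abstract: minimize user-perceived latency
    # and UAV energy usage. The weight w is an illustrative assumption.
    return -(latency_s + w * energy_j)

# Toy environment: under "good" Wi-Fi coverage, offloading via Wi-Fi is
# cheapest; under "poor" coverage, cellular offloading wins.
# state -> action -> (latency in s, energy in J); all values invented.
COSTS = {
    "good": {"local": (2.0, 3.0), "offload_wifi": (0.5, 0.5),
             "offload_cellular": (0.8, 1.0)},
    "poor": {"local": (2.0, 3.0), "offload_wifi": (4.0, 2.0),
             "offload_cellular": (1.0, 1.5)},
}

class UAVAgent:
    """Tabular epsilon-greedy Q-learning agent for one UAV."""

    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        if random.random() < self.epsilon:            # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])  # exploit

    def learn(self, state, action, r, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_error = r + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

def train(episodes=5000, seed=0):
    random.seed(seed)
    agent = UAVAgent()
    for _ in range(episodes):
        state = random.choice(("good", "poor"))       # observed coverage
        action = agent.act(state)
        lat, en = COSTS[state][action]
        agent.learn(state, action, reward(lat, en), state)
    # Greedy policy after training: best action per coverage state.
    return {s: max(ACTIONS, key=lambda a: agent.q[(s, a)])
            for s in ("good", "poor")}

policy = train()
print(policy)  # learned offloading policy per network condition
```

With these invented costs the agent settles on Wi-Fi offloading under good coverage and cellular offloading under poor coverage, mirroring the paper's idea that the learned action selects both whether to offload and which transmission technology to use.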
