Deep-Reinforcement-Learning-Based Sustainable Energy Distribution for Wireless Communication
IEEE Wireless Communications (IF 10.9), Pub Date: 2022-01-21, DOI: 10.1109/mwc.015.2100177
Ghulam Muhammad, M. Shamim Hossain

Many countries and organizations have proposed smart city projects to address exponential population growth by promoting and developing a new paradigm for meeting the electricity demand of cities. Since Internet of Things (IoT)-based systems are used extensively in smart cities, where huge amounts of data are generated and distributed, it can be challenging to capture data directly from such a complex environment and to provide precise control behavior in response. Properly scheduling numerous energy devices to meet users' needs is a core requirement of the smart city. Deep reinforcement learning (DRL) is an emerging methodology that can yield successful control behavior for time-variant dynamic systems. This article proposes an efficient DRL-based energy scheduling approach that distributes energy among devices according to consumption and users' demand. First, a deep neural network classifies the energy devices currently available in the framework; the DRL agent then schedules the devices efficiently. An edge-cloud-coordinated DRL deployment is shown to reduce the delay and cost of smart grid energy distribution.
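The abstract describes a two-stage pipeline: a deep neural network classifies the available energy devices, and a DRL agent then schedules them. As a rough illustration of the scheduling stage only, the sketch below uses tabular Q-learning (a much simpler stand-in for the deep RL the paper employs) to learn which of a few hypothetical devices should serve each discretized demand level. All device capacities, costs, demand values, and the reward shape are invented for illustration and are not taken from the paper.

```python
# Simplified sketch of the scheduling idea in the abstract. The paper uses
# deep RL; here a tabular Q-learning agent (a simpler stand-in) learns which
# energy device to dispatch for each demand level. Every numeric value below
# is an illustrative assumption, not a figure from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical device pool: (capacity in kW, cost per kW) -- assumed values.
DEVICES = [(2.0, 0.5), (5.0, 0.8), (10.0, 1.2)]
DEMAND_LEVELS = np.array([1.0, 4.0, 9.0])  # discretized user demand (kW)

N_STATES, N_ACTIONS = len(DEMAND_LEVELS), len(DEVICES)
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(state, action):
    """Penalize mismatch between supply and demand, plus energy cost,
    mirroring the abstract's goal of distributing energy according to
    consumption and users' demand."""
    demand = DEMAND_LEVELS[state]
    cap, cost = DEVICES[action]
    return -abs(cap - demand) - 0.1 * cost * cap

for episode in range(5000):
    s = rng.integers(N_STATES)              # random incoming demand level
    # Epsilon-greedy action selection: explore occasionally, else exploit.
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
    r = reward(s, a)
    s_next = rng.integers(N_STATES)         # demand evolves randomly here
    # Standard Q-learning temporal-difference update.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

for s, d in enumerate(DEMAND_LEVELS):
    print(f"demand {d:4.1f} kW -> device {int(Q[s].argmax())}")
```

After training, the learned policy tends to dispatch the device whose capacity best matches each demand level at acceptable cost; the paper's deep variant replaces the Q-table with a neural network so the same idea scales to the large, time-varying state spaces of a real smart grid.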
