Throughput Maximization by Deep Reinforcement Learning With Energy Cooperation for Renewable Ultradense IoT Networks
IEEE Internet of Things Journal (IF 10.6), Pub Date: 2020-06-16, DOI: 10.1109/jiot.2020.3002936
Ya Li, Xiaohui Zhao, Hui Liang

The ultradense network (UDN) is considered one of the key technologies for meeting the explosive growth of mobile traffic demand in the Internet of Things (IoT). It enhances network capacity by deploying small base stations in large quantities, but at the same time incurs substantial energy consumption. In this article, we use energy harvesting (EH) and energy cooperation technologies to maximize system throughput and save energy. Considering that the energy arrival process and channel information are not known a priori, we propose an optimal deep reinforcement learning (DRL) algorithm to solve this average throughput maximization problem over a finite horizon. We also propose a multiagent DRL method to address the curse of dimensionality caused by the growth of the state and action spaces. Finally, we compare these algorithms with two traditional algorithms, the greedy algorithm and the conservative algorithm. The numerical results show that the proposed algorithms are effective in increasing the long-term average system throughput.
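The abstract does not give the authors' MDP formulation or network architecture, so the following is only a minimal, self-contained sketch of the idea: a DQN-style learner that decides, slot by slot, how much harvested energy a single small base station should spend when energy arrivals and channel gains are unknown in advance. The toy environment, the Shannon-rate reward, the discrete energy levels, and all hyperparameters are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch only: state = (normalized battery, channel gain),
# action = discrete transmit-energy level, reward = per-slot throughput.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class ToySmallCellEnv:
    """One renewable small base station choosing how much stored energy to spend per slot."""
    def __init__(self, battery_cap=10.0, n_levels=5, horizon=50):
        self.battery_cap, self.horizon = battery_cap, horizon
        self.actions = np.linspace(0.0, battery_cap, n_levels)  # candidate transmit energies
        self.reset()

    def reset(self):
        self.battery, self.t = self.battery_cap / 2, 0
        self.gain = np.random.exponential(1.0)                   # fading channel, unknown a priori
        return self._state()

    def _state(self):
        return np.array([self.battery / self.battery_cap, self.gain], dtype=np.float32)

    def step(self, a_idx):
        spend = min(self.actions[a_idx], self.battery)           # cannot spend more than stored
        reward = np.log2(1.0 + spend * self.gain)                # per-slot Shannon throughput
        harvest = np.random.exponential(2.0)                     # energy arrival, unknown a priori
        self.battery = min(self.battery - spend + harvest, self.battery_cap)
        self.gain = np.random.exponential(1.0)
        self.t += 1
        return self._state(), reward, self.t >= self.horizon


qnet = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 5))  # Q-network: state -> action values
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer, gamma, eps = deque(maxlen=5000), 0.95, 0.1

env = ToySmallCellEnv()
for episode in range(200):
    s, done = env.reset(), False
    while not done:
        if random.random() < eps:                                 # epsilon-greedy exploration
            a = random.randrange(5)
        else:
            a = int(torch.argmax(qnet(torch.from_numpy(s))).item())
        s2, r, done = env.step(a)
        buffer.append((s, a, r, s2, done))
        s = s2
        if len(buffer) >= 64:                                     # one replay-based gradient step per slot
            batch = random.sample(buffer, 64)
            S = torch.from_numpy(np.stack([b[0] for b in batch]))
            A = torch.tensor([b[1] for b in batch])
            R = torch.tensor([b[2] for b in batch], dtype=torch.float32)
            S2 = torch.from_numpy(np.stack([b[3] for b in batch]))
            D = torch.tensor([float(b[4]) for b in batch])
            q = qnet(S).gather(1, A.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = R + gamma * (1 - D) * qnet(S2).max(1).values
            loss = nn.functional.mse_loss(q, target)
            opt.zero_grad(); loss.backward(); opt.step()
```

Extending the same pattern to many small base stations, each with its own agent and an energy-sharing action toward neighbors, is the spirit of the multiagent variant mentioned in the abstract; the single-agent sketch above only illustrates the learning loop.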

Updated: 2020-06-16