Reinforcement Learning Based Downlink Interference Control for Ultra-Dense Small Cells
IEEE Transactions on Wireless Communications (IF 8.9), Pub Date: 2020-01-01, DOI: 10.1109/twc.2019.2945951
Liang Xiao, Hailu Zhang, Yilin Xiao, Xiaoyue Wan, Sicong Liu, Li-Chun Wang, H. Vincent Poor

The dense deployment of small cells in 5G cellular networks raises the issue of controlling downlink inter-cell interference under time-varying channel states. In this paper, we propose a reinforcement learning based power control scheme to suppress downlink inter-cell interference and save energy for ultra-dense small cells. This scheme enables base stations to schedule the downlink transmit power without knowing the interference distribution or the channel states of the neighboring small cells. A deep reinforcement learning based interference control algorithm is designed to further accelerate learning for ultra-dense small cells with a large number of active users. Analytical bounds on the convergence performance, including the throughput, energy consumption, inter-cell interference, and base-station utility, are provided, and the computational complexity of the proposed scheme is discussed. Simulation results show that this scheme optimizes the downlink interference control performance after a sufficient number of power control iterations and significantly increases the network throughput with less energy consumption compared with a benchmark scheme.
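The scheme described above can be illustrated with a minimal tabular Q-learning sketch, in which a base station repeatedly selects a discrete downlink power level and learns from a utility that trades throughput against energy cost. All names, power levels, the state quantization, and the channel/utility model below are illustrative assumptions for exposition, not the paper's exact formulation (the paper additionally uses a deep-RL variant for large user populations).

```python
import math
import random

# Candidate transmit power levels in watts (assumed, for illustration).
POWER_LEVELS = [0.1, 0.5, 1.0, 2.0]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def utility(power, interference, noise=0.1, energy_weight=0.5):
    """Assumed utility: log-throughput (from an SINR proxy) minus an energy cost."""
    sinr = power / (interference + noise)
    return math.log2(1.0 + sinr) - energy_weight * power

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    # State: the quantized interference level observed in the previous step,
    # so the base station never needs the neighbors' interference distribution.
    n_states, n_actions = 4, len(POWER_LEVELS)
    q = [[0.0] * n_actions for _ in range(n_states)]
    state = 0
    for _ in range(episodes):
        # Epsilon-greedy action selection over power levels.
        if rng.random() < EPSILON:
            action = rng.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: q[state][a])
        # Unknown neighbor interference, drawn from an assumed range.
        interference = rng.uniform(0.05, 0.8)
        reward = utility(POWER_LEVELS[action], interference)
        next_state = min(int(interference / 0.2), n_states - 1)
        # Standard Q-learning update.
        q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state])
                                     - q[state][action])
        state = next_state
    return q

q_table = train()
# The greedy policy maps each observed interference level to a power index.
policy = [max(range(len(POWER_LEVELS)), key=lambda a: q_table[s][a])
          for s in range(len(q_table))]
print(policy)
```

After enough power control iterations the greedy policy stabilizes; with the assumed utility, higher observed interference pushes the learned choice away from power levels whose energy cost outweighs their throughput gain.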
