Deep Reinforcement Learning for Dynamic Spectrum Sharing of LTE and NR
arXiv - CS - Networking and Internet Architecture. Pub Date: 2021-02-22, DOI: arxiv-2102.11176
Ursula Challita, David Sandberg

In this paper, a proactive dynamic spectrum sharing scheme between 4G and 5G systems is proposed. In particular, a controller decides on the resource split between NR and LTE every subframe while accounting for future network states such as high-interference subframes and multimedia broadcast single frequency network (MBSFN) subframes. To solve this problem, a deep reinforcement learning (RL) algorithm based on Monte Carlo Tree Search (MCTS) is proposed. The introduced deep RL architecture is trained offline, whereby the controller predicts a sequence of future states of the wireless access network by simulating hypothetical bandwidth splits over time, starting from the current network state. The action sequence yielding the best reward is then selected. This is realized by predicting the quantities most directly relevant to planning, i.e., the reward, the action probabilities, and the value for each network state. Simulation results show that the proposed scheme takes actions that account for future states rather than acting greedily in each subframe. The results also show that the proposed framework improves system-level performance.
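The planning procedure the abstract describes matches a MuZero-style search: a learned model predicts, for each hypothetical network state, the reward, the action probabilities, and the value, and MCTS uses these predictions to look ahead over future subframes. The sketch below illustrates that loop under stated assumptions: `LearnedModel`, `mcts_plan`, the five-level NR/LTE split action space, and the random stubs are all hypothetical placeholders, not the authors' implementation.

```python
import math
import random

# Hypothetical action space: fraction of the shared bandwidth assigned to NR
# in the next subframe (the remainder goes to LTE). The paper does not
# specify the granularity; five levels are assumed here for illustration.
ACTIONS = (0.0, 0.25, 0.5, 0.75, 1.0)


class LearnedModel:
    """Stub for the learned prediction model described in the abstract.

    A MuZero-style model maps (state, action) to a predicted next state
    and reward, and maps a state to action probabilities (policy prior)
    and a value estimate. Random stubs stand in for the trained networks
    so the search loop below runs end to end.
    """

    def step(self, state, action):
        next_state = hash((state, action)) & 0xFFFFFFFF  # toy transition
        reward = random.random()  # stands in for the predicted subframe reward
        return next_state, reward

    def predict(self, state):
        priors = {a: 1.0 / len(ACTIONS) for a in ACTIONS}  # uniform policy stub
        value = random.random()  # value-network stub
        return priors, value


class Node:
    def __init__(self, prior):
        self.prior = prior       # policy probability of the action into this node
        self.visit_count = 0
        self.value_sum = 0.0
        self.reward = 0.0        # predicted reward on the edge into this node
        self.state = None        # filled in on expansion
        self.children = {}

    def value(self):
        return self.value_sum / self.visit_count if self.visit_count else 0.0


def ucb_score(parent, child, c_puct=1.25):
    """PUCT rule: the learned prior drives exploration, empirical value exploits."""
    u = c_puct * child.prior * math.sqrt(parent.visit_count) / (1 + child.visit_count)
    return child.value() + u


def mcts_plan(model, root_state, num_simulations=50, discount=0.99):
    """Search over hypothetical bandwidth splits; return the best root action."""
    root = Node(prior=1.0)
    root.state = root_state
    priors, _ = model.predict(root_state)
    for a, p in priors.items():
        root.children[a] = Node(prior=p)

    for _ in range(num_simulations):
        node, path = root, [root]
        # Selection: descend via PUCT until an unexpanded child is reached.
        while True:
            parent = node
            action, node = max(parent.children.items(),
                               key=lambda kv: ucb_score(parent, kv[1]))
            path.append(node)
            if node.state is None:
                break
        # Expansion: roll the learned model one subframe forward.
        node.state, node.reward = model.step(parent.state, action)
        priors, value = model.predict(node.state)
        for a, p in priors.items():
            node.children[a] = Node(prior=p)
        # Backup: propagate the discounted value along the search path.
        for n in reversed(path):
            n.value_sum += value
            n.visit_count += 1
            value = n.reward + discount * value

    # Commit to the most-visited split for the current subframe.
    return max(root.children.items(), key=lambda kv: kv[1].visit_count)[0]


if __name__ == "__main__":
    split = mcts_plan(LearnedModel(), root_state=0)
    print(f"NR share of bandwidth for this subframe: {split:.2f}")
```

In an online deployment, such a search would run once per subframe, with the stubbed `step` and `predict` calls replaced by the trained networks; the controller commits only the root action and replans at the next subframe, which is what lets it anticipate events such as MBSFN or high-interference subframes instead of optimizing each subframe greedily.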

Updated: 2021-02-23