Adaptive Reinforcement Learning Framework for NOMA-UAV Networks
IEEE Communications Letters (IF 3.7), Pub Date: 2021-06-29, DOI: 10.1109/lcomm.2021.3093385
Syed Khurram Mahmud, Yuanwei Liu, Yue Chen, Kok Keong Chai

We propose an adaptive reinforcement learning (A-RL) framework to maximize the sum-rate of a non-orthogonal multiple access unmanned aerial vehicle (NOMA-UAV) network. In this framework, a Mamdani fuzzy inference system (MFIS) supervises a reinforcement learning (RL) policy based on multi-armed bandits (MAB). The UAV, acting as the learning agent, serves an Internet of Things (IoT) region and manages an interference-affected channel block for the NOMA uplink. The sum-rate, rate outage probability, and average bit error rate (BER) for the far user are compared. Simulations reveal the superior performance of A-RL compared to its non-adaptive RL counterpart. Joint maximum likelihood detection (JMLD) and successive interference cancellation (SIC) are also compared in terms of BER performance and implementation complexity.
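To illustrate the general idea of an MAB policy whose exploration is steered by a supervisory module, the sketch below shows an epsilon-greedy bandit in which a toy rule-based supervisor stands in for the MFIS. This is not the authors' implementation: the arms (candidate UAV actions), the reward model (a noisy simulated sum-rate), and the adaptation rules are illustrative assumptions only.

```python
# Minimal sketch, assuming an epsilon-greedy MAB agent and a toy rule-based
# supervisor in place of the paper's Mamdani fuzzy inference system (MFIS).
# Arms, reward model, and adaptation thresholds are hypothetical.
import random


class AdaptiveBandit:
    def __init__(self, n_arms, epsilon=0.3):
        self.n_arms = n_arms          # e.g. candidate UAV power/placement actions
        self.epsilon = epsilon        # exploration probability
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward (observed sum-rate) per arm

    def select_arm(self):
        if random.random() < self.epsilon:
            return random.randrange(self.n_arms)                       # explore
        return max(range(self.n_arms), key=lambda a: self.values[a])   # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    def supervise(self, recent_rewards):
        """Toy stand-in for the MFIS: raise exploration when recent rewards
        degrade, shrink it when they improve (hypothetical rules)."""
        if len(recent_rewards) < 2:
            return
        trend = recent_rewards[-1] - recent_rewards[0]
        if trend < 0:
            self.epsilon = min(0.5, self.epsilon * 1.2)   # rewards falling: explore more
        else:
            self.epsilon = max(0.05, self.epsilon * 0.9)  # rewards improving: exploit more


def simulated_sum_rate(arm):
    """Hypothetical environment: noisy sum-rate observed for a chosen action."""
    base = [1.0, 1.8, 1.4, 2.2][arm]
    return base + random.gauss(0.0, 0.3)


if __name__ == "__main__":
    agent = AdaptiveBandit(n_arms=4)
    window = []
    for t in range(500):
        arm = agent.select_arm()
        r = simulated_sum_rate(arm)
        agent.update(arm, r)
        window = (window + [r])[-20:]   # sliding window of recent rewards
        agent.supervise(window)
    best = max(range(4), key=lambda a: agent.values[a])
    print("best arm:", best, "epsilon:", round(agent.epsilon, 3))
```

In the paper, the supervisory role is played by a Mamdani fuzzy inference system rather than the crisp if/else rules used here; the sketch only conveys how a supervisor can adapt the bandit's exploration on the fly.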

Updated: 2021-06-29