Deep Reinforcement Learning Based Active Queue Management for IoT Networks
Journal of Network and Systems Management ( IF 4.1 ) Pub Date : 2021-04-30 , DOI: 10.1007/s10922-021-09603-x
Minsu Kim , Muhammad Jaseemuddin , Alagan Anpalagan

The Internet of Things (IoT) finds applications in home, city, and industrial settings. Current networks are transitioning to a fog/edge architecture to provide the capacity IoT requires. However, to cope with the enormous traffic generated by IoT devices and to reduce queuing delay, novel self-learning network management algorithms are needed at fog/edge nodes. Active Queue Management (AQM) is a well-known intelligent packet-dropping technique for differentiated QoS. In this paper, we propose a new AQM scheme based on Deep Reinforcement Learning (DRL) and introduce a scaling factor in the reward function to achieve a trade-off between queuing delay and throughput. We choose Deep Q-Network (DQN) as the baseline for our scheme and compare our approach with various AQM schemes by deploying them at the interface of a fog/edge node, simulating configurations with different bandwidth and round-trip time (RTT) values. The simulation results show that our scheme outperforms the other AQM schemes in terms of delay and jitter while maintaining above-average throughput, and verify that DRL-based AQM is effective in managing congestion.
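The abstract describes a reward function in which a scaling factor trades queuing delay off against throughput. A minimal sketch of a reward of that general shape is given below; the exact functional form, the parameter name `alpha`, and the normalisation constants are illustrative assumptions, not details taken from the paper.

```python
def aqm_reward(throughput: float, queuing_delay: float,
               alpha: float = 0.5,
               max_throughput: float = 1.0, max_delay: float = 1.0) -> float:
    """Illustrative reward for a DRL-based AQM agent.

    Both terms are normalised to [0, 1]; alpha scales the penalty on
    queuing delay relative to the reward for throughput, so a larger
    alpha steers the learned dropping policy toward lower delay at the
    cost of some throughput.
    """
    t = throughput / max_throughput      # normalised throughput term
    d = queuing_delay / max_delay        # normalised delay penalty term
    return t - alpha * d

# Example: with alpha = 1.0 the agent weighs delay and throughput
# equally; with alpha = 0.1 it strongly favours throughput.
r_low_delay = aqm_reward(throughput=0.8, queuing_delay=0.1, alpha=1.0)
r_congested = aqm_reward(throughput=0.9, queuing_delay=0.8, alpha=1.0)
```

In a DQN setting such as the one the paper takes as its baseline, this scalar would be fed back to the agent after each packet-dropping decision at the fog/edge interface.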




Updated: 2021-05-03