LFQ: Online Learning of Per-flow Queuing Policies using Deep Reinforcement Learning
arXiv - CS - Networking and Internet Architecture. Pub Date: 2020-07-06, DOI: arxiv-2007.02735
Maximilian Bachl, Joachim Fabini, Tanja Zseby

The increasing number of different, incompatible congestion control algorithms has led to an increased deployment of fair queuing. Fair queuing isolates each network flow and can thus guarantee fairness for each flow even if the flows' congestion controls are not inherently fair. So far, each queue in a fair queuing system either has a fixed, static maximum size or is managed by an Active Queue Management (AQM) algorithm like CoDel. In this paper we design an AQM mechanism, the Learning Fair Qdisc (LFQ), that learns the optimal buffer size for each flow online, according to a specified reward function. We show that our deep-learning-based algorithm can dynamically assign the optimal queue size to each flow depending on its congestion control, delay and bandwidth. Compared to competing fair AQM schedulers, it provides significantly smaller queues while achieving the same or higher throughput.
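
To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how a deep RL agent could learn per-flow buffer sizes online: a small network maps per-flow statistics to a buffer-size limit and is updated with a REINFORCE-style step against a reward that trades throughput off against queueing delay. The feature set, the reward weighting `delay_penalty`, the network size and all names (`PerFlowSizer`, `online_step`) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an LFQ-style per-flow buffer sizer.
# Assumed reward: throughput (Mbps) minus a penalty on queueing delay (ms).

import torch
import torch.nn as nn

class PerFlowSizer(nn.Module):
    """Maps per-flow features to a Gaussian over log2(buffer size in packets)."""
    def __init__(self, n_features: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 2),  # outputs: mean and log-std of the action
        )

    def forward(self, features: torch.Tensor) -> torch.distributions.Normal:
        mean, log_std = self.body(features).unbind(-1)
        return torch.distributions.Normal(mean, log_std.exp().clamp(1e-3, 2.0))

def reward(throughput_mbps: float, queue_delay_ms: float,
           delay_penalty: float = 0.1) -> float:
    # Assumed reward shape: favour throughput, punish standing queues.
    return throughput_mbps - delay_penalty * queue_delay_ms

policy = PerFlowSizer()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def online_step(features, throughput_mbps, queue_delay_ms) -> int:
    """One online update: sample a buffer size, observe reward, REINFORCE."""
    dist = policy(torch.tensor(features, dtype=torch.float32))
    action = dist.sample()                       # log2 of the buffer size
    buf_packets = int(2.0 ** action.clamp(0, 12).item())
    r = reward(throughput_mbps, queue_delay_ms)
    loss = -dist.log_prob(action) * r            # REINFORCE gradient estimator
    opt.zero_grad()
    loss.backward()
    opt.step()
    return buf_packets                           # new max queue size for this flow
```

On each refresh of a flow's statistics, the qdisc would call `online_step` with that flow's features (e.g., normalized RTT, rate and loss) together with the throughput and queueing delay observed under the previous buffer size, so the policy adapts to each flow's congestion control over time.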

Updated: 2020-10-16