Kelly Cache Networks
IEEE/ACM Transactions on Networking (IF 3.0) Pub Date: 2020-04-14, DOI: 10.1109/tnet.2020.2982863
Milad Mahdian, Armin Moharrer, Stratis Ioannidis, Edmund Yeh

We study networks of M/M/1 queues in which nodes act as caches that store objects. Exogenous requests for objects are routed towards nodes that store them; as a result, object traffic in the network is determined not only by demand but, crucially, by where objects are cached. We determine how to place objects in caches to attain a certain design objective, such as minimizing network congestion or retrieval delays. We show that for a broad class of objectives, including minimizing both the expected network delay and the sum of network queue lengths, this optimization problem can be cast as an NP-hard submodular maximization problem. We show that the so-called continuous greedy algorithm attains a ratio arbitrarily close to $1-1/e\approx 0.63$ using a deterministic estimation via a power series; this drastically reduces execution time over prior art, which resorts to sampling. Finally, we show that our results generalize, beyond M/M/1 queues, to networks of M/M/$k$ and symmetric M/D/1 queues.
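For intuition, the sketch below illustrates the continuous greedy (Frank-Wolfe style) scheme the abstract refers to, on a toy single-cache instance with a synthetic submodular caching-gain objective. It is not the authors' implementation: the gradient of the multilinear extension is estimated by plain Monte Carlo sampling rather than the paper's deterministic power-series estimator, and all names, sizes, and weights are illustrative.

```python
import numpy as np

# Toy setup (all values hypothetical): one cache with capacity k, a catalog of
# n objects, and a monotone submodular caching-gain function standing in for
# the delay-reduction objective of the paper.
rng = np.random.default_rng(0)
n, k = 8, 3                      # catalog size and cache capacity (assumed)
weights = rng.random(n)          # synthetic per-object request weights

def gain(mask):
    """Caching gain of a 0/1 vector `mask`: concave of a modular function,
    hence monotone submodular as a set function (a stand-in objective)."""
    return np.sqrt(weights @ mask)

def grad_multilinear(y, samples=200):
    """Sampled estimate of the gradient of the multilinear extension G(y):
    dG/dy_i = E[gain(S + i) - gain(S - i)], S ~ independent Bernoulli(y).
    (The paper replaces this sampling step with a deterministic power-series
    estimator; that refinement is omitted here.)"""
    g = np.zeros(n)
    for _ in range(samples):
        s = (rng.random(n) < y).astype(float)
        for i in range(n):
            s_with, s_without = s.copy(), s.copy()
            s_with[i], s_without[i] = 1.0, 0.0
            g[i] += gain(s_with) - gain(s_without)
    return g / samples

def continuous_greedy(steps=50):
    """Continuous greedy: in each step move y toward the feasible vertex of
    the capacity (uniform matroid) polytope best aligned with the gradient."""
    y = np.zeros(n)
    for _ in range(steps):
        g = grad_multilinear(y)
        v = np.zeros(n)
        v[np.argsort(-g)[:k]] = 1.0   # best vertex: top-k gradient coordinates
        y += v / steps
    return np.clip(y, 0.0, 1.0)

y = continuous_greedy()
# Round the fractional solution by keeping the k largest coordinates
# (pipage/swap rounding would be used to preserve the 1 - 1/e guarantee).
cached = sorted(np.argsort(-y)[:k].tolist())
print("fractional y:", np.round(y, 2))
print("objects to cache:", cached)
```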

Updated: 2020-06-19