Prefetching and caching for minimizing service costs: Optimal and approximation strategies
Performance Evaluation (IF 2.2), Pub Date: 2021-01-01, DOI: 10.1016/j.peva.2020.102149
Guocong Quan , Atilla Eryilmaz , Jian Tan , Ness Shroff

Abstract: Strategically prefetching data has been utilized in practice to improve caching performance. Apart from caching data items upon request, items can also be prefetched into the cache before requests actually occur. The caching and prefetching operations compete for the limited cache space, whose size is typically much smaller than the number of data items. A key question is how to design an optimal prefetching and caching policy, assuming that future requests can be predicted to a certain extent. This question is non-trivial even under the idealized assumption that future requests are precisely known. To investigate this problem, we propose a cost-based service model. The objective is to find the optimal offline prefetching and caching policy that minimizes the accumulated cost for a given request sequence. By casting it as a min-cost flow problem, we are able to find the optimal policy for a data trace of length N in expected time O(N^{3/2}) via flow-based algorithms. However, this approach requires the entire trace to be known in advance and cannot be applied in real time. To this end, we analytically characterize the optimal policy by identifying an optimal cache eviction mechanism. We derive conditions under which proactive prefetching is a better choice than passive caching. Based on these insights, we propose a lightweight approximation policy that exploits only near-future predictions. Moreover, the approximation policy can be applied in real time and processes the entire trace in O(N) expected time. We prove that the competitive ratio of the approximation policy is less than 2. Extensive simulations verify its near-optimal performance for both heavy- and light-tailed popularity distributions.
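The offline setting described above builds on the classical farthest-in-future (Belady) eviction idea: with the full trace known, always evict the cached item whose next request is furthest away. Below is a minimal Python sketch of that baseline under an assumed unit miss cost; it illustrates only the passive-caching side, not the paper's joint prefetching-and-caching policy or its min-cost flow construction, and the names used here (MISS_COST, serve_trace) are illustrative assumptions rather than the authors' notation.

```python
MISS_COST = 1.0  # assumed cost of serving a request from the backend


def serve_trace(trace, cache_size):
    """Serve `trace` with farthest-in-future eviction; return total cost."""
    n = len(trace)

    # Precompute, for each position i, the index of the next request
    # for the same item (infinity if it is never requested again).
    next_use = [float("inf")] * n
    last_seen = {}
    for i in range(n - 1, -1, -1):
        next_use[i] = last_seen.get(trace[i], float("inf"))
        last_seen[trace[i]] = i

    cache = {}  # item -> position of its next request
    cost = 0.0
    for i, item in enumerate(trace):
        if item in cache:
            # Hit: refresh the item's next-use position.
            cache[item] = next_use[i]
            continue
        cost += MISS_COST  # miss: fetch from the backend
        if len(cache) >= cache_size:
            # Evict the cached item requested farthest in the future.
            victim = max(cache, key=cache.get)
            del cache[victim]
        cache[item] = next_use[i]
    return cost


if __name__ == "__main__":
    trace = ["a", "b", "c", "a", "d", "b", "a", "c"]
    print(serve_trace(trace, cache_size=2))  # total cost: 6.0
```

The O(N) real-time approximation in the paper differs from this sketch in that it looks only a bounded distance into the future and additionally decides when prefetching an item is cheaper than waiting to cache it on a miss; those decision rules are characterized analytically in the paper itself.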
