Penalty- and Locality-aware Memory Allocation in Redis Using Enhanced AET
ACM Transactions on Storage (IF 2.1). Pub Date: 2021-05-28. DOI: 10.1145/3447573
Cheng Pan 1, Xiaolin Wang 1, Yingwei Luo 1, Zhenlin Wang 2
Due to the large data volumes and low latency requirements of modern web services, an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used (LRU) or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance an existing cache model, the Average Eviction Time (AET) model, so that it can model a KV cache. We then apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and exhibits more quantitatively predictable performance. Moreover, dynamically switching policies between pRedis and HC yields a further 1.1%∼5.5% reduction in average latency.
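To make the AET model concrete: for an LRU cache, the model derives a miss-ratio curve from a trace's reuse-time histogram alone. Let P(t) be the fraction of references whose reuse time exceeds t (cold references count as infinite reuse time); the average eviction time of a cache of size c is the time T at which the accumulated area under P(t) reaches c, and the predicted miss ratio is P(T). The sketch below is a simplified, per-reference illustration of this idea (the paper's enhanced model further accounts for variable object sizes and KV-specific behavior, which this sketch omits):

```python
from collections import defaultdict

def miss_ratio_curve(trace, max_size):
    """Sketch of the baseline AET model for an LRU cache of fixed-size
    objects: predict the miss ratio at each cache size from the trace's
    reuse-time histogram only."""
    n = len(trace)
    last_seen = {}
    rt_hist = defaultdict(int)  # reuse time -> count
    cold = 0                    # first-time (cold) references
    for t, key in enumerate(trace):
        if key in last_seen:
            rt_hist[t - last_seen[key]] += 1
        else:
            cold += 1
        last_seen[key] = t

    def P(t):
        # Fraction of references whose reuse time is > t;
        # cold misses behave as infinite reuse time.
        longer = sum(cnt for rt, cnt in rt_hist.items() if rt > t)
        return (longer + cold) / n

    # Sweep time forward, accumulating the area under P(t); each time
    # the area crosses the next cache size c, the sweep time is AET(c)
    # and the predicted miss ratio is P(AET(c)).
    curve = {}
    area, t, c = 0.0, 0, 1
    while c <= max_size and t < n:
        area += P(t)
        t += 1
        while c <= max_size and area >= c:
            curve[c] = P(t)
            c += 1
    return curve
```

On a trace that alternates between two keys, the model correctly predicts that a cache of size 2 misses only on the two cold references, while a cache of size 1 misses on everything.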
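The core intuition behind penalty-aware replacement can be illustrated with a minimal toy cache (this is not pRedis itself): among eviction candidates, evict the entry whose expected cost of a future miss, i.e., an estimate of its reuse likelihood times its measured miss penalty (e.g., back-end fetch latency), is lowest. The frequency/age reuse estimate below is an assumption in the spirit of Hyperbolic Caching; pRedis instead derives locality from the enhanced AET model:

```python
class PenaltyAwareCache:
    """Toy cache that evicts the entry with the lowest expected miss
    cost: (estimated reuse rate) x (per-key miss penalty). Reuse rate
    is approximated as hits / time-in-cache, a deliberately simple
    stand-in for a real locality model."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # key -> [value, hits, insert_time, penalty]

    def put(self, key, value, penalty, now):
        if key not in self.entries and len(self.entries) >= self.capacity:
            self._evict(now)
        self.entries[key] = [value, 1, now, penalty]

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None          # miss: caller pays this key's penalty
        entry[1] += 1            # record the hit
        return entry[0]

    def _evict(self, now):
        def expected_cost(item):
            _, (_, hits, t0, penalty) = item
            age = max(now - t0, 1e-9)
            return (hits / age) * penalty  # reuse rate x miss penalty
        victim = min(self.entries.items(), key=expected_cost)[0]
        del self.entries[victim]
```

A hot key with a high penalty (expensive to re-fetch) survives eviction even when a colder, cheap-to-refill key was inserted more recently, which is exactly the behavior a recency-only policy such as LRU cannot express.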

Updated: 2021-05-28