CoT: Decentralized Elastic Caches for Cloud Environments
arXiv - CS - Databases Pub Date : 2020-06-15 , DOI: arxiv-2006.08067
Victor Zakhary, Lawrence Lim, Divyakant Agrawal, Amr El Abbadi

Distributed caches are widely deployed to serve social networks and web applications at billion-user scales. This paper presents Cache-on-Track (CoT), a decentralized, elastic, and predictive caching framework for cloud environments. CoT proposes a new cache replacement policy specifically tailored for small front-end caches that serve skewed workloads. Front-end servers use a heavy hitter tracking algorithm to continuously track the top-k hot keys. CoT dynamically caches the hottest C keys out of the tracked keys. Our experiments show that CoT's replacement policy consistently outperforms the hit-rates of LRU, LFU, and ARC for the same cache size on different skewed workloads. Also, CoT slightly outperforms the hit-rate of LRU-2 when both policies are configured with the same tracking (history) size. CoT achieves back-end load-balance with 50% to 93.75% less front-end cache in comparison to other replacement policies.
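To make the mechanism concrete, the following is a minimal illustrative sketch (not the paper's implementation) of the idea described in the abstract: a front-end cache that tracks the top-k most frequent keys with a simple heavy-hitter counter and admits only the hottest C of them into the cache. The class name `CoTSketch`, the `fetch` callback, and the eviction details are assumptions for illustration; the paper's actual tracking algorithm and data structures differ.

```python
from collections import Counter


class CoTSketch:
    """Illustrative front-end cache: track top-k keys, cache the hottest C.

    This is a simplified sketch of the track-then-cache idea, not the
    algorithm from the CoT paper.
    """

    def __init__(self, k, c):
        assert c <= k, "cache size C must not exceed tracker size k"
        self.k = k                # tracker (history) size
        self.c = c                # front-end cache capacity
        self.counts = Counter()   # approximate per-key access frequencies
        self.cache = {}           # key -> value for the hottest C keys

    def access(self, key, fetch):
        """Serve `key`; `fetch(key)` loads it from the back-end on a miss.

        Returns (value, hit) where `hit` is True on a front-end cache hit.
        """
        # Count the access; keep only the k most frequent keys tracked.
        self.counts[key] += 1
        if len(self.counts) > self.k:
            victim, _ = min(self.counts.items(), key=lambda kv: kv[1])
            del self.counts[victim]
            self.cache.pop(victim, None)

        if key in self.cache:          # front-end cache hit
            return self.cache[key], True

        value = fetch(key)             # miss: fetch from the back-end
        # Admit the key only if it is among the hottest C tracked keys.
        hottest = {hot for hot, _ in self.counts.most_common(self.c)}
        if key in hottest:
            self.cache[key] = value
            if len(self.cache) > self.c:
                coldest = min(self.cache, key=lambda cached: self.counts[cached])
                del self.cache[coldest]
        return value, False
```

Under a skewed workload, a hot key quickly accumulates a high count, enters the top-C set, and subsequent accesses are served from the front-end cache without touching the back-end, which is the load-balancing effect the abstract describes.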

Updated: 2020-06-19