Co-Active: A Workload-Aware Collaborative Cache Management Scheme for NVMe SSDs
IEEE Transactions on Parallel and Distributed Systems (IF 5.6), Pub Date: 2021-01-15, DOI: 10.1109/tpds.2021.3052028
Hui Sun, Shangshang Dai, Jianzhong Huang, Xiao Qin

When it comes to NAND flash-based solid-state disks (SSDs), the cache narrows the performance gap between user-level I/Os and flash memory. Cache management schemes have a strong impact on the endurance and performance of flash memory. The vast majority of existing cache management techniques adopt a passive data-update style (e.g., GCaR, LCR), thereby degrading response times in applications dominated by burst I/O requests [1].

[1] Burst I/O requests must be served in a real-time manner. This type of I/O access pattern is prevalent in data-intensive workloads.

To address this issue, we propose a collaborative active write-back cache management scheme, called Co-Active, customized for I/O access patterns and the usage status of flash chips. We design a hot/cold separation module to determine whether data in the workload is hot or cold. When a flash chip is idle, cold dirty data in the cache is flushed to that idle chip, turning it into clean data. To curtail the cost of cache replacement, clean data is preferentially evicted during replacement. A maximum write-back threshold is configured according to the level of burst I/O requests in the workload; this threshold averts flushing redundant write I/Os into flash memory, thereby boosting the endurance of flash memory. Experiments are conducted to validate the advantages of Co-Active in terms of average response time, write amplification, and erase count. The findings show that, compared with six popular cache management schemes (LRU, CFLRU, GCaR_CFLRU, LCR, and MQSim), Co-Active (1) slashes the average response time by up to 83.89 percent, with an average of 32.7 percent; (2) drives up the performance-cliff degree by up to 76.4 percent, with an average of 42.3 percent; and (3) improves the write amplification rate by up to 60.5 percent, with an average of 5.4 percent.
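To make the mechanisms described above concrete, the following is a minimal Python sketch of the behaviors the abstract names: hot/cold separation, active write-back of cold dirty pages while a flash chip is idle, clean-first eviction, and a maximum write-back threshold. This is not the authors' implementation; the class name CoActiveLikeCache, the parameters max_writeback_per_idle and hot_threshold, and the _flush_to_flash stub are illustrative assumptions.

```python
from collections import OrderedDict

class CoActiveLikeCache:
    """Sketch of an active write-back cache with hot/cold separation,
    clean-first eviction, and a capped write-back per idle window.
    All names and thresholds are illustrative, not from the paper."""

    def __init__(self, capacity, max_writeback_per_idle=4, hot_threshold=2):
        self.capacity = capacity
        self.max_writeback = max_writeback_per_idle   # cap on flushes per idle window
        self.hot_threshold = hot_threshold            # access count that marks data as "hot"
        self.entries = OrderedDict()                  # lpn -> {"dirty": bool, "hits": int}, LRU first

    def access(self, lpn, is_write):
        """Serve a cache access in LRU order; writes mark the page dirty."""
        entry = self.entries.pop(lpn, {"dirty": False, "hits": 0})
        entry["hits"] += 1
        entry["dirty"] = entry["dirty"] or is_write
        self.entries[lpn] = entry                     # reinsert at MRU position
        if len(self.entries) > self.capacity:
            self._evict()

    def _evict(self):
        """Prefer evicting a clean page (no flash write on the critical path);
        otherwise evict the LRU dirty page and flush it."""
        for lpn in self.entries:                      # iterate from LRU toward MRU
            if not self.entries[lpn]["dirty"]:
                del self.entries[lpn]
                return
        victim, _ = next(iter(self.entries.items()))
        self._flush_to_flash(victim)
        del self.entries[victim]

    def on_chip_idle(self):
        """Active write-back: while a flash chip is idle, flush cold dirty
        pages (turning them clean), up to the maximum write-back threshold."""
        flushed = 0
        for lpn, entry in self.entries.items():       # the LRU end holds the coldest pages
            if flushed >= self.max_writeback:
                break
            if entry["dirty"] and entry["hits"] < self.hot_threshold:
                self._flush_to_flash(lpn)
                entry["dirty"] = False                # now clean, so a later eviction is cheap
                flushed += 1

    def _flush_to_flash(self, lpn):
        pass  # stand-in for issuing the flash program operation to the idle chip
```

Under these assumptions, the intended interplay is visible: flushing cold dirty pages during idle periods enlarges the pool of clean pages, so replacement rarely has to wait for a flash write, while the per-idle-window cap keeps the extra background writes from inflating write amplification and erase counts.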


Updated: 2021-02-05