Addressing Variability in Reuse Prediction for Last-Level Caches
arXiv - CS - Hardware Architecture. Pub Date: 2020-06-15, arXiv: 2006.08487
Priyank Faldu

The Last-Level Cache (LLC) accounts for the bulk of a modern CPU's transistor budget and is essential for application performance, as it provides fast access to data in contrast to the much slower main memory. However, applications with large working sets often exhibit streaming and/or thrashing access patterns at the LLC. As a result, a large fraction of the LLC capacity is occupied by dead blocks that will not be referenced again, leading to inefficient utilization of that capacity. To improve cache efficiency, state-of-the-art cache management techniques employ prediction mechanisms that learn from past access patterns with the aim of accurately identifying as many dead blocks as possible. Once identified, dead blocks are evicted from the LLC to make room for cache blocks with potentially high reuse. In this thesis, we identify variability in the reuse behavior of cache blocks as the key factor limiting the cache efficiency achievable by state-of-the-art predictive techniques. Variability in reuse prediction is inevitable, arising from numerous factors outside the LLC's control; its sources include control-flow variation, speculative execution, and contention among cores sharing the cache, among others. This variability challenges existing techniques' ability to reliably identify the end of a block's useful lifetime, lowering prediction accuracy, coverage, or both. To address this challenge, this thesis designs robust LLC cache management mechanisms and policies that minimize cache misses in the face of variability in reuse prediction, while keeping the cost and complexity of the hardware implementation low. To that end, we propose two cache management techniques, one domain-agnostic and one domain-specialized, that improve cache efficiency by addressing variability in reuse prediction.
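The predict-and-evict loop described above can be illustrated with a toy model. The sketch below is not the thesis's mechanism; it is a minimal, generic dead-block predictor loosely modeled on PC-indexed schemes such as SHiP, with illustrative names (`ReusePredictor`, `CacheSet`) and a simple 2-bit saturating-counter policy chosen for clarity.

```python
from collections import OrderedDict

class ReusePredictor:
    """PC-indexed 2-bit saturating counters (a simplified, illustrative scheme)."""
    def __init__(self):
        self.counters = {}  # pc -> counter in [0, 3], default 1 (weakly reused)

    def predict_dead(self, pc):
        # Predict "dead" only when the counter has saturated low.
        return self.counters.get(pc, 1) == 0

    def train(self, pc, reused):
        # Reinforce on observed reuse, decay when a block dies unreferenced.
        c = self.counters.get(pc, 1)
        self.counters[pc] = min(3, c + 1) if reused else max(0, c - 1)

class CacheSet:
    """One set of a set-associative cache: evict predicted-dead blocks first,
    falling back to LRU when no block is predicted dead."""
    def __init__(self, ways, predictor):
        self.ways = ways
        self.pred = predictor
        self.blocks = OrderedDict()  # addr -> (inserting pc, reused?); LRU order

    def access(self, addr, pc):
        if addr in self.blocks:
            ins_pc, _ = self.blocks.pop(addr)
            self.blocks[addr] = (ins_pc, True)  # mark reused, move to MRU
            return True  # hit
        if len(self.blocks) == self.ways:
            # Prefer a victim whose inserting PC is predicted dead; else LRU.
            victim = next((a for a, (p, _) in self.blocks.items()
                           if self.pred.predict_dead(p)),
                          next(iter(self.blocks)))
            v_pc, v_reused = self.blocks.pop(victim)
            self.pred.train(v_pc, v_reused)  # learn from the evicted block
        self.blocks[addr] = (pc, False)
        return False  # miss
```

Interleaving a streaming access pattern with a hot, frequently reused block shows the intended effect: once the predictor's counter for the streaming PC saturates low, streaming blocks are chosen as victims and the hot block stays resident.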

Updated: 2020-06-16