The Granularity Gap Problem: A Hurdle for Applying Approximate Memory to Complex Data Layout
arXiv - CS - Emerging Technologies. Pub Date: 2021-01-26, DOI: arXiv-2101.10605
Soramichi Akiyama, Ryota Shioya

Main memory access latency has not improved much for more than two decades, while CPU performance had been increasing exponentially until recently. Approximate memory is a technique that reduces DRAM access latency in exchange for weakened data integrity. It benefits applications that are robust to noise in their input and intermediate data, such as artificial intelligence, multimedia processing, and graph processing. To obtain reasonable outputs from applications running on approximate memory, it is crucial to protect critical data while accelerating accesses to non-critical data. We refer to the minimum size of a contiguous memory region to which the same error rate is applied in approximate memory as the approximation granularity. A fundamental limitation of approximate memory is that the approximation granularity is as large as a few kilobytes. However, applications may have critical and non-critical data interleaved at a smaller granularity. For example, a data structure for a graph node can contain both pointers to neighboring nodes (critical) and its score (non-critical, depending on the use case). This data structure cannot be mapped directly to approximate memory because of the gap between the approximation granularity and the granularity of data criticality. We refer to this issue as the granularity gap problem. In this paper, we first show that many applications potentially suffer from this problem. We then propose a framework to quantitatively evaluate the performance overhead of a possible method for avoiding the problem using known techniques. The evaluation results show that the overhead is non-negligible compared to the expected benefit of approximate memory, suggesting that the granularity gap problem is a significant concern.

Updated: 2021-01-27