Micro-architectural analysis of in-memory OLTP: Revisited
The VLDB Journal (IF 4.2) Pub Date: 2021-03-31, DOI: 10.1007/s00778-021-00663-8
Utku Sirin , Pınar Tözün , Danica Porobic , Ahmad Yasin , Anastasia Ailamaki

Micro-architectural behavior of traditional disk-based online transaction processing (OLTP) systems has been investigated extensively over the past couple of decades. Results show that traditional OLTP systems mostly under-utilize the available micro-architectural resources. In-memory OLTP systems, on the other hand, process all the data in main memory and can therefore omit the buffer pool. Furthermore, since they are usually designed from scratch, they typically adopt more lightweight concurrency control mechanisms, cache-conscious data structures, and cleaner codebases. Hence, we expect significant differences in micro-architectural behavior when running OLTP on platforms optimized for in-memory processing as opposed to disk-based database systems. In particular, we expect in-memory systems to exploit micro-architectural features such as instruction and data caches significantly better than disk-based systems. This paper sheds light on the micro-architectural behavior of in-memory database systems by analyzing it and contrasting it to the behavior of disk-based systems when running OLTP workloads. The results show that, despite all the design changes, in-memory OLTP exhibits micro-architectural behavior very similar to that of disk-based OLTP: more than half of the execution time goes to memory stalls, in which either instruction cache misses or long-latency data misses from the last-level cache (LLC) dominate. Even though in-memory systems designed from the ground up can eliminate instruction cache misses, the reduction in instruction stalls amplifies the impact of LLC data misses. As a result, only 30% of the CPU cycles are used to retire instructions, while 70% are wasted on stalls, for both traditional disk-based and new-generation in-memory OLTP.
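
To make the cycle accounting concrete, the sketch below illustrates a Top-Down (TMA) level-1 breakdown of the kind such micro-architectural studies rely on, computing the fraction of pipeline slots that retire useful work versus those lost to front-end, bad-speculation, and back-end stalls. The counter names follow the standard Intel Top-Down level-1 formulas; the numeric values are hypothetical, chosen only to mirror an OLTP-like profile where roughly 30% of slots retire, and are not measurements from the paper.

```python
# Minimal sketch of a Top-Down (TMA) level-1 cycle breakdown, assuming a
# 4-wide core and hypothetical counter readings (not the paper's data).

def tma_level1(counters):
    """Return the fraction of pipeline issue slots in each top-level category."""
    slots = 4 * counters["CPU_CLK_UNHALTED.THREAD"]            # 4 issue slots per cycle
    frontend_bound = counters["IDQ_UOPS_NOT_DELIVERED.CORE"] / slots
    bad_speculation = (counters["UOPS_ISSUED.ANY"]
                       - counters["UOPS_RETIRED.RETIRE_SLOTS"]
                       + 4 * counters["INT_MISC.RECOVERY_CYCLES"]) / slots
    retiring = counters["UOPS_RETIRED.RETIRE_SLOTS"] / slots
    backend_bound = 1.0 - frontend_bound - bad_speculation - retiring
    return {"Retiring": retiring,
            "Front-end bound": frontend_bound,
            "Bad speculation": bad_speculation,
            "Back-end bound": backend_bound}

if __name__ == "__main__":
    # Hypothetical counter values, roughly mimicking an OLTP-like profile in
    # which only ~30% of slots retire uops and the rest are stalled.
    sample = {
        "CPU_CLK_UNHALTED.THREAD": 1_000_000_000,
        "IDQ_UOPS_NOT_DELIVERED.CORE": 1_200_000_000,   # instruction-fetch stalls
        "UOPS_ISSUED.ANY": 1_400_000_000,
        "UOPS_RETIRED.RETIRE_SLOTS": 1_200_000_000,
        "INT_MISC.RECOVERY_CYCLES": 25_000_000,
    }
    for category, share in tma_level1(sample).items():
        print(f"{category:18s} {share:6.1%}")
```

In this framing, the abstract's "30% retiring / 70% stalls" corresponds to the Retiring slot fraction versus everything else, with the stall share split between front-end slots (dominated by instruction cache misses) and back-end slots (dominated by long-latency LLC data misses).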



Updated: 2021-03-31