Learned Sorted Table Search and Static Indexes in Small Space: Methodological and Practical Insights via an Experimental Study
arXiv - CS - Information Retrieval. Pub Date: 2021-07-19, DOI: arxiv-2107.09480
Domenico Amato, Raffaele Giancarlo, Giosuè Lo Bosco

Sorted Table Search Procedures are the quintessential query-answering tool and are still widely used, e.g., in Search Engines (Google Chrome). Speeding them up, using only a small amount of additional space with respect to the table being searched, is still a quite significant achievement. Static Learned Indexes have been very successful in achieving such a speed-up, but they leave open a major question: to what extent can one enjoy the speed-up of Learned Indexes while using constant or nearly constant additional space? By generalizing the experimental methodology of a recent benchmarking study on Learned Indexes, we shed light on this question by considering two scenarios: the first is quite elementary, i.e., textbook code, while the second uses advanced Learned Indexing algorithms and the sophisticated software platforms that support them. Although in both cases one would expect a positive answer, obtaining it is not as simple as it may seem. Indeed, our extensive set of experiments reveals a complex relationship between query time and model space. The findings regarding this relationship, and the corresponding quantitative estimates across memory levels, can be of interest to algorithm designers and of use to practitioners as well. As an essential part of our research, we introduce two new models that are of interest in their own right. The first is a constant-space model that can be seen as a generalization of $k$-ary search, while the second is a synoptic RMI in which model space usage can be controlled.
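To make the baseline concrete, below is a minimal sketch of textbook $k$-ary search over a sorted array of unsigned 64-bit keys, in plain C++17. It only illustrates the classical routine that the paper's constant-space model generalizes, not the authors' code; the function name `k_ary_search` and the default branching factor are our own choices, and $k \ge 2$ and a sorted table are assumed.

```cpp
// Minimal sketch of textbook k-ary search (assumes k >= 2 and a sorted table).
#include <cstddef>
#include <cstdint>
#include <vector>

// Returns the index of the first element >= key (lower bound),
// or table.size() if no such element exists.
std::size_t k_ary_search(const std::vector<std::uint64_t>& table,
                         std::uint64_t key, std::size_t k = 4) {
    std::size_t lo = 0, hi = table.size();       // current search window [lo, hi)
    while (hi - lo > k) {
        std::size_t step = (hi - lo) / k;        // width of each of the k segments
        std::size_t next_lo = lo, next_hi = hi;
        for (std::size_t i = 1; i < k; ++i) {    // probe the k - 1 internal pivots
            std::size_t pivot = lo + i * step;
            if (table[pivot] < key) {
                next_lo = pivot + 1;             // answer lies to the right of this pivot
            } else {
                next_hi = pivot;                 // answer lies at or before this pivot
                break;
            }
        }
        lo = next_lo;
        hi = next_hi;
    }
    while (lo < hi && table[lo] < key) ++lo;     // final linear scan over at most k elements
    return lo;
}
```

For $k = 2$ this is ordinary binary search; larger branching factors trade fewer iterations for more comparisons per iteration, while the additional space stays constant.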
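For contrast, the following toy sketch illustrates the "predict, then correct" idea behind static Learned Indexes: a single linear model, fitted by least squares on (key, position) pairs, predicts where a key should sit, and a bounded binary search around the prediction finishes the lookup. All names (`LinearModel`, `learned_lookup`) are hypothetical, the sketch assumes a non-empty table of distinct keys and queries for keys present in it, and it is not the RMI nor the paper's synoptic RMI, which rely on more elaborate model hierarchies.

```cpp
// Toy "predict, then correct" lookup with one linear model (illustrative only).
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct LinearModel {
    double slope = 0.0, intercept = 0.0;  // position ~= slope * key + intercept
    long long max_error = 0;              // largest observed prediction error

    long long predict(std::uint64_t key, std::size_t n) const {
        const double p = slope * static_cast<double>(key) + intercept;
        return std::clamp(static_cast<long long>(p), 0LL,
                          static_cast<long long>(n) - 1);
    }

    // Fit by simple least squares over the whole table and record the
    // worst-case error, which bounds the correction search window.
    void train(const std::vector<std::uint64_t>& keys) {
        const double n = static_cast<double>(keys.size());
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (std::size_t i = 0; i < keys.size(); ++i) {
            const double x = static_cast<double>(keys[i]);
            const double y = static_cast<double>(i);
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        const double denom = n * sxx - sx * sx;
        slope = denom != 0 ? (n * sxy - sx * sy) / denom : 0.0;
        intercept = (sy - slope * sx) / n;
        for (std::size_t i = 0; i < keys.size(); ++i) {
            const long long err =
                predict(keys[i], keys.size()) - static_cast<long long>(i);
            max_error = std::max(max_error, std::llabs(err));
        }
    }
};

// Locate a key assumed to be present in the table: predict a position, then
// binary-search only the window that the recorded worst-case error covers.
std::size_t learned_lookup(const std::vector<std::uint64_t>& keys,
                           const LinearModel& m, std::uint64_t key) {
    const long long n = static_cast<long long>(keys.size());
    const long long pos = m.predict(key, keys.size());
    const auto first = keys.begin() + std::max(0LL, pos - m.max_error);
    const auto last  = keys.begin() + std::min(n, pos + m.max_error + 1);
    return static_cast<std::size_t>(std::lower_bound(first, last, key) - keys.begin());
}
```

The model itself occupies constant space (two doubles plus an error bound); how much faster lookups become as one spends more space on richer models is the query-time versus model-space relationship the study quantifies across memory levels.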

Updated: 2021-07-21