Embedding-based Zero-shot Retrieval through Query Generation
arXiv - CS - Information Retrieval. Pub Date: 2020-09-22, DOI: arxiv-2009.10270
Davis Liang, Peng Xu, Siamak Shakeri, Cicero Nogueira dos Santos, Ramesh Nallapati, Zhiheng Huang, Bing Xiang

Passage retrieval addresses the problem of locating relevant passages, usually from a large corpus, given a query. In practice, lexical term-matching algorithms such as BM25 are popular choices for retrieval owing to their efficiency. However, term-based matching algorithms often miss relevant passages that have no lexical overlap with the query, and they cannot be fine-tuned on downstream datasets. In this work, we adopt the embedding-based two-tower architecture as our neural retrieval model. Since labeled data can be scarce and neural retrieval models require vast amounts of data to train, we propose a novel method for generating synthetic training data for retrieval. Our system produces remarkable results, significantly outperforming BM25 on 5 out of 6 datasets tested, by an average of 2.45 points in Recall@1. In some cases, our model trained on synthetic data can even outperform the same model trained on real data.
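The two-tower setup the abstract refers to can be sketched in miniature: one tower encodes the query, the other encodes each passage, and passages are ranked by similarity in the shared vector space. Below is a minimal, self-contained illustration in which a unit-normalized bag-of-words encoder stands in for the paper's trained neural encoders — the `encode` function and the toy passages are purely hypothetical, not the authors' model.

```python
from collections import Counter
import math

def encode(text):
    # Hypothetical encoder: a unit-normalized bag-of-words vector stands in
    # for a trained neural tower. Returns a sparse {token: weight} map.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {tok: c / norm for tok, c in counts.items()}

def cosine(a, b):
    # Cosine similarity of two unit-normalized sparse vectors.
    return sum(w * b.get(tok, 0.0) for tok, w in a.items())

def retrieve(query, passages, k=2):
    # Two-tower retrieval: encode the query once, score every passage
    # embedding against it, and return the top-k passages.
    q = encode(query)
    ranked = sorted(passages, key=lambda p: cosine(q, encode(p)), reverse=True)
    return ranked[:k]

passages = [
    "BM25 ranks passages by lexical overlap with the query",
    "Dense retrievers embed queries and passages into a shared vector space",
    "Synthetic training data can be generated when labels are scarce",
]
print(retrieve("lexical overlap ranking", passages, k=1))
```

With a real neural encoder the ranking would reflect semantic similarity rather than token overlap, which is exactly the gap over BM25 that the abstract describes; the bag-of-words stand-in here behaves more like a lexical matcher.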

Updated: 2020-09-23