Longformer for MS MARCO Document Re-ranking Task
arXiv - CS - Information Retrieval. Pub Date: 2020-09-20, arXiv:2009.09392
Ivan Sekulić, Amir Soleimani, Mohammad Aliannejadi, Fabio Crestani

Two-step document ranking, where an initial retrieval is done by a classical information retrieval method and followed by a neural re-ranking model, is the new standard. The best performance is achieved by using transformer-based models, e.g., BERT, as re-rankers. We employ Longformer, a BERT-like model for long documents, on the MS MARCO document re-ranking task. The complete code used for training the model can be found at: https://github.com/isekulic/longformer-marco
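
The re-ranking step the abstract describes can be sketched roughly as below. This is a minimal illustration, not the authors' implementation (that lives in the linked repository): the allenai/longformer-base-4096 checkpoint, the binary-relevance scoring head, and the rerank helper are assumptions, and the classification head would need fine-tuning on MS MARCO relevance labels before its scores are meaningful.

```python
# Sketch of Longformer-based re-ranking with Hugging Face transformers.
# NOTE: this is an assumed setup, not the authors' code; the classification
# head below is randomly initialized and must be fine-tuned on MS MARCO
# query-document relevance pairs before its scores mean anything.
from typing import List, Tuple

import torch
from transformers import LongformerTokenizer, LongformerForSequenceClassification

MODEL_NAME = "allenai/longformer-base-4096"  # assumed base checkpoint

tokenizer = LongformerTokenizer.from_pretrained(MODEL_NAME)
model = LongformerForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # label 1 = "relevant" (assumption)
)
model.eval()


def rerank(query: str, candidate_docs: List[str]) -> List[Tuple[str, float]]:
    """Score each (query, document) pair and sort candidates by relevance.

    The candidates would come from a first-stage classical retriever
    (e.g., BM25), per the two-step pipeline in the abstract.
    """
    scored = []
    for doc in candidate_docs:
        # Longformer's sliding-window attention handles sequences up to
        # 4096 tokens, so long MS MARCO documents need far less truncation
        # than with vanilla BERT (512 tokens).
        inputs = tokenizer(
            query, doc, truncation=True, max_length=4096, return_tensors="pt"
        )
        with torch.no_grad():
            logits = model(**inputs).logits
        # Probability of the "relevant" class as the ranking score.
        scored.append((doc, logits.softmax(dim=-1)[0, 1].item()))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

LongformerForSequenceClassification places global attention on the leading special token by default, so the query can attend to the whole document despite the otherwise local attention pattern; that is the property that makes Longformer a natural fit for long-document re-ranking.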

Updated: 2020-09-22