Studying Catastrophic Forgetting in Neural Ranking Models
arXiv - CS - Information Retrieval Pub Date : 2021-01-18 , DOI: arxiv-2101.06984
Jesus Lovon-Melgarejo, Laure Soulier, Karen Pinel-Sauvagnat, Lynda Tamine

Several deep neural ranking models have been proposed in the recent IR literature. While their transferability to a single target domain represented by a dataset has been widely addressed using traditional domain adaptation strategies, the question of their cross-domain transferability remains under-studied. We study here to what extent neural ranking models catastrophically forget old knowledge acquired from previously observed domains after acquiring new knowledge, leading to decreased performance on those domains. Our experiments show that the effectiveness of neural IR ranking models is achieved at the cost of catastrophic forgetting, and that a lifelong learning strategy using a cross-domain regularizer successfully mitigates the problem. Using an explanatory approach built on a regression model, we also show the effect of domain characteristics on the rise of catastrophic forgetting. We believe that the obtained results can be useful for both theoretical and practical future work in neural IR.
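The abstract does not spell out the cross-domain regularizer. A common form in lifelong learning is an EWC-style quadratic penalty that anchors parameters important to previously seen domains; the sketch below assumes that form (the function names, the Fisher-based weighting, and the scalar `lam` are illustrative assumptions, not the paper's stated method):

```python
import numpy as np

def cross_domain_penalty(params, old_params, importance, lam=1.0):
    """Quadratic penalty pulling new parameters toward those learned on a
    previous domain, weighted by a per-parameter importance estimate
    (e.g. a diagonal Fisher approximation in EWC-style methods)."""
    return (lam / 2.0) * float(np.sum(importance * (params - old_params) ** 2))

def regularized_loss(task_loss, params, old_params, importance, lam=1.0):
    """Total training objective on the new domain: the ranking task loss
    plus the cross-domain forgetting penalty."""
    return task_loss + cross_domain_penalty(params, old_params, importance, lam)

# Illustrative usage with toy values
params = np.array([1.0, 2.0])
old_params = np.array([0.0, 0.0])
importance = np.array([1.0, 1.0])
penalty = cross_domain_penalty(params, old_params, importance, lam=2.0)  # 5.0
```

The key design choice in such regularizers is the importance weighting: parameters that mattered little on the old domain are left free to adapt, while important ones are held near their old values, trading some plasticity for retention.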

Updated: 2021-01-19