On the Calibration and Uncertainty of Neural Learning to Rank Models
arXiv - CS - Information Retrieval | Pub Date: 2021-01-12 | arXiv: 2101.04356
Gustavo Penha, Claudia Hauff

According to the Probability Ranking Principle (PRP), ranking documents in decreasing order of their probability of relevance leads to an optimal document ranking for ad-hoc retrieval. The PRP holds when two conditions are met: [C1] the models are well calibrated, and [C2] the probabilities of relevance are reported with certainty. We know, however, that deep neural networks (DNNs) are often not well calibrated and have several sources of uncertainty, so [C1] and [C2] might not be satisfied by neural rankers. Given the success of neural Learning to Rank (L2R) approaches, and of BERT-based approaches in particular, we first analyze under which circumstances deterministic neural rankers, i.e. rankers that output point estimates, are calibrated. Then, motivated by our findings, we use two techniques to model the uncertainty of neural rankers, leading to the proposed stochastic rankers, which output a predictive distribution of relevance as opposed to point estimates. Our experimental results on the ad-hoc retrieval task of conversation response ranking reveal that (i) BERT-based rankers are not robustly calibrated, and stochastic BERT-based rankers yield better calibration; and (ii) uncertainty estimation is beneficial both for risk-aware neural ranking, i.e. taking the uncertainty into account when ranking documents, and for predicting unanswerable conversational contexts.
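The abstract does not name the two uncertainty-modeling techniques, so the sketch below assumes Monte Carlo Dropout as one representative way to turn a deterministic ranker into a stochastic one, and a mean-minus-variance rule as a representative risk-aware scoring scheme. ToyRanker, mc_dropout_scores, risk_aware_rank, and the trade-off weight b are illustrative names introduced here, not the authors' code; a real setup would wrap a BERT-based relevance scorer instead of the toy MLP.

import torch
import torch.nn as nn

class ToyRanker(nn.Module):
    """Hypothetical stand-in for a BERT-based relevance scorer.
    The dropout layer is what lets us sample at inference time."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64),
            nn.ReLU(),
            nn.Dropout(p=0.1),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Probability-of-relevance point estimate per document.
        return torch.sigmoid(self.net(x)).squeeze(-1)

def mc_dropout_scores(model: nn.Module, x: torch.Tensor, n_samples: int = 20) -> torch.Tensor:
    """Draw n_samples stochastic forward passes with dropout kept active,
    yielding a predictive distribution of relevance for each document."""
    model.train()  # keep dropout on at inference; freeze other layers in a real setup
    with torch.no_grad():
        return torch.stack([model(x) for _ in range(n_samples)])  # (n_samples, n_docs)

def risk_aware_rank(samples: torch.Tensor, b: float = 1.0) -> torch.Tensor:
    """Risk-aware scoring: rank by expected relevance penalized by variance,
    score = E[s] - b * Var[s], so uncertain documents are demoted."""
    mean, var = samples.mean(dim=0), samples.var(dim=0)
    return torch.argsort(mean - b * var, descending=True)

if __name__ == "__main__":
    torch.manual_seed(0)
    docs = torch.randn(5, 32)  # 5 candidate documents with toy features
    samples = mc_dropout_scores(ToyRanker(), docs)
    print("risk-aware ranking:", risk_aware_rank(samples).tolist())

In this sketch, model.train() keeps dropout active at inference time so that repeated forward passes sample from the predictive distribution; averaging the samples gives the expected relevance, and the variance penalty demotes documents the ranker is unsure about.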

Last updated: 2021-01-13