SmoothI: Smooth Rank Indicators for Differentiable IR Metrics
arXiv - CS - Information Retrieval Pub Date : 2021-05-03 , DOI: arxiv-2105.00942
Thibaut Thonet, Yagmur Gizem Cinar, Eric Gaussier, Minghan Li, Jean-Michel Renders

Information retrieval (IR) systems traditionally aim to maximize metrics built on rankings, such as precision or NDCG. However, the non-differentiability of the ranking operation prevents direct optimization of such metrics in state-of-the-art neural IR models, which rely entirely on the ability to compute meaningful gradients. To address this shortcoming, we propose SmoothI, a smooth approximation of rank indicators that serves as a basic building block to devise differentiable approximations of IR metrics. We further provide theoretical guarantees on SmoothI and derived approximations, showing in particular that the approximation errors decrease exponentially with an inverse temperature-like hyperparameter that controls the quality of the approximations. Extensive experiments conducted on four standard learning-to-rank datasets validate the efficacy of the listwise losses based on SmoothI, in comparison to previously proposed ones. Additional experiments with a vanilla BERT ranking model on a text-based IR task also confirm the benefits of our listwise approach.
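The core idea — replacing hard rank indicators with a temperature-controlled softmax so that ranking-based metrics become differentiable — can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's exact SmoothI recursion: here each rank position is a softmax over item scores, with items already (softly) assigned to earlier positions pushed down by a large penalty, and the resulting indicators are plugged into DCG@k. As the inverse-temperature `alpha` grows, the indicators approach true one-hot rank indicators.

```python
import numpy as np

def smooth_rank_indicators(scores, k, alpha=50.0, penalty=1e3):
    """Illustrative softmax relaxation of rank indicators (sketch only,
    not the paper's exact formulation). Row r approximates a one-hot
    vector over items, selecting the item ranked at position r+1."""
    scores = np.asarray(scores, dtype=float)
    used = np.zeros(scores.shape[0])   # soft mass already assigned per item
    rows = []
    for _ in range(k):
        # down-weight items that earlier positions have (softly) claimed
        logits = alpha * scores - penalty * used
        logits -= logits.max()         # numerical stability
        p = np.exp(logits)
        p /= p.sum()
        rows.append(p)
        used = used + p
    return np.stack(rows)              # shape (k, n), each row sums to 1

def smooth_dcg_at_k(scores, gains, k, alpha=50.0):
    """Differentiable surrogate for DCG@k built from the smooth indicators."""
    indicators = smooth_rank_indicators(scores, k, alpha)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))   # 1/log2(r+1) for r=1..k
    return float(discounts @ (indicators @ np.asarray(gains, dtype=float)))
```

For example, with `scores = [2.0, 0.5, 1.0]` and `gains = [3, 1, 0]`, the exact DCG@2 of the induced ranking (items 0 then 2) is 3.0, and `smooth_dcg_at_k` approaches that value for large `alpha`; because every operation is smooth, gradients with respect to the scores exist everywhere, which is the property the paper exploits for listwise training.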

Updated: 2021-05-04