When Fair Ranking Meets Uncertain Inference
arXiv - CS - Information Retrieval Pub Date : 2021-05-05 , DOI: arxiv-2105.02091
Avijit Ghosh, Ritam Dutt, Christo Wilson

Existing fair ranking systems, especially those designed to be demographically fair, assume that accurate demographic information about individuals is available to the ranking algorithm. In practice, however, this assumption may not hold: in real-world contexts like ranking job applicants or credit seekers, social and legal barriers may prevent algorithm operators from collecting people's demographic information. In these cases, algorithm operators may attempt to infer people's demographics and then supply these inferences as inputs to the ranking algorithm. In this study, we investigate how uncertainty and errors in demographic inference impact the fairness offered by fair ranking algorithms. Using simulations and three case studies with real datasets, we show how demographic inferences drawn from real systems can lead to unfair rankings. Our results suggest that developers should not use inferred demographic data as input to fair ranking algorithms unless the inferences are extremely accurate.
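The core failure mode the abstract describes can be illustrated with a minimal simulation sketch. This is not the paper's actual methodology or any of its case studies; the quota-based ranker, the error rate, and all variable names here are hypothetical. The idea: a ranker enforces proportional representation for a protected group using *inferred* labels, but when those labels carry inference errors, the representation measured on *true* labels can fall short of the quota.

```python
import random

def fair_top_k(candidates, k, quota):
    # Hypothetical fairness intervention: reserve a minimum share of
    # the top-k for candidates whose *inferred* label marks them as
    # protected, then fill the rest by score.
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    protected = [c for c in ranked if c["inferred"] == 1]
    rest = [c for c in ranked if c["inferred"] == 0]
    need = int(quota * k)
    top = protected[:need] + rest[: k - need]
    return sorted(top, key=lambda c: c["score"], reverse=True)

random.seed(0)
error_rate = 0.3  # assumed probability that demographic inference is wrong
candidates = []
for _ in range(1000):
    true_group = 1 if random.random() < 0.3 else 0
    # Simulate a noisy demographic inference: flip the true label
    # with probability error_rate.
    flipped = random.random() < error_rate
    inferred = (1 - true_group) if flipped else true_group
    candidates.append(
        {"score": random.random(), "true": true_group, "inferred": inferred}
    )

k = 100
top = fair_top_k(candidates, k, quota=0.3)
inferred_share = sum(c["inferred"] for c in top) / k
true_share = sum(c["true"] for c in top) / k
print(f"inferred-group share in top-{k}: {inferred_share:.2f}")
print(f"true-group share in top-{k}:     {true_share:.2f}")
```

By construction the quota is always met on the inferred labels (the share is exactly 0.30), while the share computed on true labels drifts with the error rate: the ranker certifies fairness against labels that may not reflect reality, which is the gap the study measures.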

Updated: 2021-05-06