On the Applicability of ML Fairness Notions
arXiv - CS - Computers and Society | Pub Date: 2020-06-30 | DOI: arxiv-2006.16745
Karima Makhlouf, Sami Zhioua, Catuscia Palamidessi

ML-based predictive systems are increasingly used to support decisions with a critical impact on individuals' lives, such as college admission, job hiring, child custody, and criminal risk assessment. As a result, fairness has emerged as an important requirement to guarantee that predictive systems do not discriminate against specific individuals or entire sub-populations, in particular minorities. Given the inherent subjectivity of the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey of fairness notions that, unlike other surveys in the literature, addresses the question of "which notion of fairness is most suited to a given real-world scenario, and why?". Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements together to recommend the most suitable fairness notion in every specific setup. The results are summarized in a decision diagram that can be used by practitioners and policy makers to navigate the relatively large catalogue of fairness notions.
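
The abstract does not spell out individual fairness notions, but two widely used group-fairness notions, statistical parity and equal opportunity, illustrate what such notions measure in practice. The sketch below is an illustrative example only, not the paper's method; the function names and toy admission data are hypothetical.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.
    A value of 0 means the decisions satisfy statistical parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between two groups.
    A value of 0 means the decisions satisfy equal opportunity."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_0 - tpr_1

if __name__ == "__main__":
    # Toy college-admission example: qualified (y_true) vs. admitted (y_pred),
    # with a binary protected attribute splitting applicants into groups 0 and 1.
    y_true = [1, 1, 0, 0, 1, 1, 0, 0]
    y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print("Statistical parity difference:", statistical_parity_difference(y_pred, group))
    print("Equal opportunity difference: ", equal_opportunity_difference(y_true, y_pred, group))
```

On this toy data the two metrics disagree in magnitude and interpretation, which is precisely why the survey's question of matching a fairness notion to the characteristics of a given scenario matters.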

Updated: 2020-10-20