A Survey of Bias in Machine Learning Through the Prism of Statistical Parity
The American Statistician (IF 1.8), Pub Date: 2021-08-25, DOI: 10.1080/00031305.2021.1952897
P. Besse, E. del Barrio, P. Gordaliza, J-M. Loubes, L. Risser
Abstract

Applications based on machine learning models have become an indispensable part of everyday life and the professional world. As a consequence, a critical question has recently arisen among the public: do algorithmic decisions convey any type of discrimination against specific groups of the population or minorities? In this article, we show the importance of understanding how bias can be introduced into automatic decisions. We first present a mathematical framework for the fair learning problem, specifically in the binary classification setting. We then propose to quantify the presence of bias by using the standard disparate impact index on the real and well-known adult income dataset. Finally, we assess the performance of different approaches aimed at reducing bias in binary classification outcomes. Importantly, we show that some intuitive methods are ineffective with respect to the statistical parity criterion. This sheds light on the fact that building fair machine learning models may be a particularly challenging task, in particular when the training observations themselves contain bias.
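The disparate impact index mentioned in the abstract is commonly defined in the fair-learning literature as the ratio of positive-outcome rates between the protected group and the reference group. A minimal sketch of this computation is given below; the data are synthetic placeholders (not the actual adult income dataset), and the function name is illustrative, not taken from the paper.

```python
def disparate_impact(y_pred, s):
    """Disparate impact DI = P(Yhat=1 | S=0) / P(Yhat=1 | S=1),
    where s == 0 marks the protected group. Values below 0.8 are
    often flagged under the so-called "four-fifths rule"."""
    protected = [y for y, g in zip(y_pred, s) if g == 0]
    reference = [y for y, g in zip(y_pred, s) if g == 1]
    rate_protected = sum(protected) / len(protected)
    rate_reference = sum(reference) / len(reference)
    return rate_protected / rate_reference

# Toy example: 2 of 4 protected individuals vs. 3 of 4 reference
# individuals receive the positive decision.
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
s      = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact(y_pred, s))  # 0.5 / 0.75 ≈ 0.667, below the 0.8 threshold
```

Statistical parity corresponds to this ratio being close to one, i.e., the decision rate not depending on the sensitive attribute.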



