Highly Accurate, But Still Discriminatory
Business & Information Systems Engineering (IF 7.9), Pub Date: 2020-11-24, DOI: 10.1007/s12599-020-00673-w
Alina Köchling , Shirin Riazy , Marius Claus Wehner , Katharina Simbeck

The study aims to identify whether algorithmic decision making leads to unfair (i.e., unequal) treatment of certain protected groups in the recruitment context. Firms increasingly implement algorithmic decision making to save costs and increase efficiency. Moreover, algorithmic decision making is often considered fairer than human decision making, which is prone to social prejudices. Recent publications, however, suggest that the fairness of algorithmic decision making is not guaranteed. To investigate this further, highly accurate algorithms were used to analyze a pre-existing data set of 10,000 video clips of individuals in self-presentation settings. The analysis shows that the under-representation of gender and ethnic groups in the training data set leads to an unpredictable overestimation and/or underestimation of the likelihood of inviting members of these groups to a job interview. Furthermore, the algorithms replicate the existing inequalities in the data set. Firms therefore have to be careful when implementing algorithmic video analysis during recruitment, as biases occur if the underlying training data set is unbalanced.
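The core finding, that a group's under-representation in the training data makes the model's predicted invitation likelihood for that group drift unpredictably up or down, can be illustrated with a small synthetic sketch. The code below is hypothetical and not taken from the paper's method or data: it trains a logistic-regression "invite" model on data in which one group contributes only a few dozen examples, then measures how far the model's mean predicted invitation probability for that group deviates from the group's actual invitation rate across several random seeds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_group(rng, n, group_id):
    # Synthetic applicants: both groups follow the same true scoring rule,
    # so any systematic gap comes from the data imbalance, not the task itself.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return np.column_stack([X, np.full(n, group_id)]), y

for seed in range(5):
    rng = np.random.default_rng(seed)
    # Unbalanced training data: the protected group (id 1) is heavily under-represented.
    X_maj, y_maj = make_group(rng, 5000, 0)
    X_min, y_min = make_group(rng, 60, 1)
    clf = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                   np.concatenate([y_maj, y_min]))

    # Balanced evaluation sample for the under-represented group only.
    X_eval, y_eval = make_group(rng, 2000, 1)
    gap = clf.predict_proba(X_eval)[:, 1].mean() - y_eval.mean()
    print(f"seed {seed}: predicted-minus-actual invite rate for group 1 = {gap:+.3f}")
```

Because the minority group is represented by only a few dozen examples, the sign and size of the gap change from seed to seed, mirroring the unpredictable overestimation and/or underestimation described above.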
