From human resources to human rights: Impact assessments for hiring algorithms
Ethics and Information Technology ( IF 3.633 ) Pub Date : 2021-06-25 , DOI: 10.1007/s10676-021-09599-7
Josephine Yam , Joshua August Skorburg

Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs, and less human bias. Despite these promises, they also bring perils: their use can inflict unintentional harms on individual human rights, including five rights in particular: the rights to work, to equality and nondiscrimination, to privacy, to freedom of expression, and to freedom of association. Despite these human rights harms, the AI ethics literature has focused predominantly on abstract ethical principles. This is problematic for two reasons. First, AI principles have been criticized as vague and not actionable. Second, discussing algorithmic risks in terms of vague ethical principles provides no accountability, and this lack of accountability creates an algorithmic accountability gap. Closing this gap is crucial because, without accountability, the use of hiring algorithms can lead to discrimination and unequal access to employment opportunities. This paper makes two contributions to the AI ethics literature. First, it frames the ethical risks of hiring algorithms using international human rights law as a universal standard for determining algorithmic accountability. Second, it evaluates four types of algorithmic impact assessments in terms of how effectively they address the five human rights of job applicants implicated in hiring algorithms, and determines which of these assessments can help companies audit their hiring algorithms and close the algorithmic accountability gap.




Updated: 2021-06-25