The Flaws of Policies Requiring Human Oversight of Government Algorithms
arXiv - CS - Human-Computer Interaction. Pub Date: 2021-09-10, DOI: arxiv-2109.05067
Ben Green

Policymakers around the world are increasingly considering how to prevent government uses of algorithms from producing injustices. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. However, the functional quality of this regulatory approach has not been thoroughly interrogated. In this article, I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, human oversight policies legitimize government use of flawed and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a more rigorous approach for determining whether and how to incorporate algorithms into government decision-making. First, policymakers must critically consider whether it is appropriate to use an algorithm at all in a specific context. Second, before deploying an algorithm alongside human oversight, vendors or agencies must conduct preliminary evaluations of whether people can effectively oversee the algorithm.

Updated: 2021-09-14