Equal Protection Under Algorithms: A New Statistical and Legal Framework
Michigan Law Review (IF 2.527), Pub Date: 2020-01-01, DOI: 10.36644/mlr.119.2.equal
Crystal Yang, Will Dobbie

In this paper, we provide a new statistical and legal framework to understand the legality and fairness of predictive algorithms under the Equal Protection Clause. We begin by reviewing the main legal concerns regarding the use of protected characteristics such as race and the correlates of protected characteristics such as criminal history. The use of race and non-race correlates in predictive algorithms generates direct and proxy effects of race, respectively, that can lead to racial disparities that many view as unwarranted and discriminatory. These effects have led to the mainstream legal consensus that the use of race and non-race correlates in predictive algorithms is both problematic and potentially unconstitutional under the Equal Protection Clause. This mainstream position is also reflected in practice, with all commonly-used predictive algorithms excluding race and many excluding non-race correlates such as employment and education. In the second part of the paper, we challenge the mainstream legal position that the use of a protected characteristic always violates the Equal Protection Clause. We first develop a statistical framework that formalizes exactly how the direct and proxy effects of race can lead to algorithmic predictions that disadvantage minorities relative to non-minorities. While an overly formalistic legal solution requires exclusion of race and all potential non-race correlates, we show that this type of algorithm is unlikely to work in practice because nearly all algorithmic inputs are correlated with race. We then show that there are two simple statistical solutions that can eliminate the direct and proxy effects of race, and which are implementable even when all inputs are correlated with race. 
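The mechanics of a proxy effect, and one of the corrective ideas the abstract describes (estimating with race included, then neutralizing race at prediction time), can be illustrated with a small simulation. This is a minimal sketch, not the authors' actual specification: the data-generating process, coefficients, and variable names below are hypothetical, chosen only to make the omitted-variable logic visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical simulated data (not the paper's NYC pretrial dataset).
# R: protected-class indicator; X: a non-race input (e.g., prior arrests)
# that is correlated with R; Y: the outcome being predicted.
R = rng.binomial(1, 0.5, n)
X = 1.0 * R + rng.normal(0, 1, n)             # X is correlated with race
Y = 0.5 * R + 1.0 * X + rng.normal(0, 1, n)   # race also has a direct effect

def ols(Z, y):
    """Closed-form OLS with an intercept; returns the coefficient vector."""
    Z = np.column_stack([np.ones(len(y)), Z])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

# 1) "Race-blind" model: exclude R. The coefficient on X absorbs part of
#    race's direct effect -- a proxy effect of the kind the paper formalizes.
b_blind = ols(X[:, None], Y)
pred_blind = b_blind[0] + b_blind[1] * X

# 2) One corrective idea: estimate WITH race, so X's coefficient is purged
#    of the proxy, then neutralize the direct effect by assigning everyone
#    the same value of R at prediction time.
b_full = ols(np.column_stack([X, R]), Y)
pred_fixed = b_full[0] + b_full[1] * X + b_full[2] * R.mean()

gap_blind = pred_blind[R == 1].mean() - pred_blind[R == 0].mean()
gap_fixed = pred_fixed[R == 1].mean() - pred_fixed[R == 0].mean()

print(f"coef on X, race-blind: {b_blind[1]:.3f}")   # inflated above 1.0
print(f"coef on X, race-aware: {b_full[1]:.3f}")    # close to the true 1.0
print(f"prediction gap, race-blind:  {gap_blind:.3f}")
print(f"prediction gap, neutralized: {gap_fixed:.3f}")
```

In this toy setup the race-blind model loads race's direct effect onto the correlated input X, inflating both its coefficient and the cross-group prediction gap; estimating with race and then holding it fixed at prediction removes that distortion, which is the intuition behind the claim that exclusion alone cannot eliminate proxy effects.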
We argue that our proposed algorithms uphold the principles of the Equal Protection doctrine because they ensure that individuals are not treated differently on the basis of membership in a protected class, in stark contrast to commonly-used algorithms that unfairly disadvantage minorities despite the exclusion of race. We conclude by empirically testing our proposed algorithms in the context of the New York City pretrial system. We show that nearly all commonly-used algorithms violate certain principles underlying the Equal Protection Clause by including variables that are correlated with race, generating substantial proxy effects that unfairly disadvantage blacks relative to whites. Both of our proposed algorithms substantially reduce the number of black defendants detained compared to these commonly-used algorithms by eliminating these proxy effects. These findings suggest a fundamental rethinking of the Equal Protection doctrine as it applies to predictive algorithms and the folly of relying on commonly-used algorithms.

Last updated: 2020-01-01