Almost politically acceptable criminal justice risk assessment
Criminology & Public Policy (IF 3.5), Pub Date: 2020-08-11, DOI: 10.1111/1745-9133.12500
Richard Berk, Ayya A. Elzarka

In criminal justice risk forecasting, one can prove that it is impossible to optimize accuracy and fairness at the same time. One can also prove that it is usually impossible to optimize simultaneously all of the usual group definitions of fairness. In policy settings, one is necessarily left with tradeoffs about which many stakeholders will adamantly disagree. The result is a contentious stalemate. In this article, we offer a different approach. We do not seek perfectly accurate and perfectly fair risk assessments. We seek politically acceptable risk assessments. We describe and apply a machine learning approach that addresses many of the most visible claims of "racial bias" to arraignment data on 300,000 offenders. Regardless of whether such claims are true, we adjust our procedures to compensate. We train the algorithm on White offenders only and compute risk with test data separately for White offenders and Black offenders. Thus, the fitted algorithm structure is the same for both groups; the algorithm treats all offenders as if they were White. But because White and Black offenders can bring different predictor distributions to the White-trained algorithm, we provide additional adjustments as needed.
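The single-group training idea described above can be sketched in code. This is a minimal illustration only: the data are synthetic, the predictors (prior arrests, age) and a plain logistic regression are stand-ins for the paper's actual arraignment features and machine learning procedure, which are not specified in the abstract. The point is the structure: fit on White offenders alone, then score White and Black test cases through the same fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic arraignment-style data (illustrative only): two predictors
# and a binary re-arrest outcome generated from a known logit.
n = 2000
race = rng.choice(["White", "Black"], size=n)
x_raw = np.column_stack([rng.poisson(2.0, n),      # hypothetical: prior arrests
                         rng.normal(30.0, 8.0, n)])  # hypothetical: age
logit = 0.4 * x_raw[:, 0] - 0.05 * x_raw[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Standardize predictors for stable gradient descent.
x = (x_raw - x_raw.mean(axis=0)) / x_raw.std(axis=0)

def fit_logistic(X, y, lr=0.05, steps=2000):
    """Plain gradient-descent logistic regression -- a stand-in for the
    paper's (unspecified) risk algorithm."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Train on White offenders only...
white = race == "White"
w = fit_logistic(x[white], y[white])

# ...then compute risk separately for each group with the SAME fitted
# structure; the model treats all cases as if they were White. Any
# further adjustments for differing predictor distributions (as the
# abstract mentions) would happen downstream of these scores.
risk_white = predict(w, x[white])
risk_black = predict(w, x[~white])
```

Because both groups pass through one set of fitted weights, any difference in the two score distributions comes only from differences in the predictors each group brings, which is exactly what the abstract's "additional adjustments" are meant to address.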
