Evaluating Fairness of Algorithmic Risk Assessment Instruments: The Problem With Forcing Dichotomies
Criminal Justice and Behavior (IF 2.1) · Pub Date: 2021-08-28 · DOI: 10.1177/00938548211040544
Samantha A. Zottola, Sarah L. Desmarais, Evan M. Lowder, Sarah E. Duhart Clarke

Researchers and stakeholders have developed many definitions to evaluate whether algorithmic pretrial risk assessment instruments are fair in terms of their error and accuracy. Error and accuracy are often operationalized using three sets of indicators: false-positive and false-negative percentages, false-positive and false-negative rates, and positive and negative predictive value. To calculate these indicators, a threshold must be set, and continuous risk scores must be dichotomized. We provide a data-driven examination of these three sets of indicators using data from three studies on the most widely used algorithmic pretrial risk assessment instruments: the Public Safety Assessment, the Virginia Pretrial Risk Assessment Instrument, and the Federal Pretrial Risk Assessment. Overall, our findings highlight how conclusions regarding fairness are affected by the limitations of these indicators. Future work should move toward examining whether there are biases in how the risk assessment scores are used to inform decision-making.
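To make the three sets of indicators concrete, below is a minimal sketch in Python of how each is computed once continuous risk scores are dichotomized at a threshold. The scores, outcomes, groups, and threshold here are hypothetical illustrations, not data from the Public Safety Assessment, Virginia Pretrial Risk Assessment Instrument, or Federal Pretrial Risk Assessment studies; the point is only that all three sets depend on where the cut point is placed.

```python
# Hedged sketch: the three sets of error/accuracy indicators discussed above,
# computed after dichotomizing continuous risk scores at a threshold.
# All data below are hypothetical and for illustration only.

def confusion_counts(scores, outcomes, threshold):
    """Dichotomize scores at `threshold` and tally the confusion matrix.
    outcomes: 1 = pretrial failure occurred, 0 = no failure."""
    tp = fp = tn = fn = 0
    for s, y in zip(scores, outcomes):
        predicted_high_risk = s >= threshold
        if predicted_high_risk and y == 1:
            tp += 1
        elif predicted_high_risk and y == 0:
            fp += 1
        elif not predicted_high_risk and y == 0:
            tn += 1
        else:
            fn += 1
    return tp, fp, tn, fn

def fairness_indicators(scores, outcomes, threshold):
    """Return all three sets of indicators as proportions
    (multiply by 100 for percentages)."""
    tp, fp, tn, fn = confusion_counts(scores, outcomes, threshold)
    n = tp + fp + tn + fn
    return {
        # Set 1: false-positive / false-negative percentages (share of all cases)
        "fp_pct": fp / n,
        "fn_pct": fn / n,
        # Set 2: false-positive / false-negative rates (conditioned on true outcome)
        "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
        "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),
        # Set 3: positive / negative predictive value (conditioned on prediction)
        "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
        "npv": tn / (tn + fn) if (tn + fn) else float("nan"),
    }

# Hypothetical comparison of two groups at one shared threshold:
group_a = ([2, 3, 5, 6, 4, 1], [0, 1, 1, 1, 0, 0])
group_b = ([4, 5, 2, 6, 3, 1], [1, 1, 0, 1, 0, 1])
for name, (scores, outcomes) in {"A": group_a, "B": group_b}.items():
    print(name, fairness_indicators(scores, outcomes, threshold=4))
```

Note that the three sets condition on different things (all cases, true outcomes, and predictions, respectively), so a threshold that equalizes one set across groups will generally not equalize the others.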




Updated: 2021-08-29