Artificial fairness? Trust in algorithmic police decision-making
Journal of Experimental Criminology (IF 1.8). Pub Date: 2021-09-12. DOI: 10.1007/s11292-021-09484-9
Zoë Hobson 1, Julia A Yesberg 1, Ben Bradford 1, Jonathan Jackson 2,3

Objectives

Test whether (1) people view a policing decision made by an algorithm as more or less trustworthy than the same decision made by an officer; (2) people presented with a specific instance of algorithmic policing express greater or lesser support for the general use of algorithmic policing; and (3) people use trust as a heuristic through which to make sense of an unfamiliar technology like algorithmic policing.

Methods

An online experiment tested whether different decision-making methods, outcomes and scenario types affect judgements about the appropriateness and fairness of decision-making and the general acceptability of police use of this particular technology.

Results

People saw a decision as less fair and less appropriate when an algorithm decided than when an officer decided. Yet perceptions of fairness and appropriateness were strong predictors of support for police use of algorithms, and exposure to a successful use of an algorithm was linked, via trust in the decision made, to greater support for police use of algorithms.

Conclusions

Making decisions solely on the basis of algorithms may damage trust: the more police rely on algorithmic decision-making alone, the less people may trust those decisions. However, mere exposure to the successful use of algorithms seems to enhance the general acceptability of this technology.



