Technologies of Crime Prediction: The Reception of Algorithms in Policing and Criminal Courts
Social Problems (IF 3.0). Pub Date: 2020-03-05. DOI: 10.1093/socpro/spaa004
Sarah Brayne, Angèle Christin

The number of predictive technologies used in the U.S. criminal justice system is on the rise. Yet there is little research to date on the reception of algorithms in criminal justice institutions. We draw on ethnographic fieldwork conducted within a large urban police department and a midsized criminal court to assess the impact of predictive technologies at different stages of the criminal justice process. We first show that similar arguments are mobilized to justify the adoption of predictive algorithms in law enforcement and criminal courts. In both cases, algorithms are described as more objective and efficient than humans’ discretionary judgment. We then study how predictive algorithms are used, documenting similar processes of professional resistance among law enforcement and legal professionals. In both cases, resentment toward predictive algorithms is fueled by fears of deskilling and heightened managerial surveillance. Two practical strategies of resistance emerge: foot-dragging and data obfuscation. We conclude by discussing how predictive technologies do not replace, but rather displace, discretion to less visible—and therefore less accountable—areas within organizations, a shift which has important implications for inequality and the administration of justice in the age of big data.

KEYWORDS: algorithms; prediction; policing; criminal courts; ethnography

The authors contributed equally and are listed alphabetically. We would like to thank the three anonymous reviewers, as well as the organizers and participants of the Willen Seminar at Barnard College in 2016, the “Punishment, Society, and Technology” session of the LSA Annual Meeting in 2018, and the “Innovations and Technology in Studies of Crime and Social Control” session of the ASA Annual Meeting in 2018, for their helpful comments and feedback. Please direct correspondence to Sarah Brayne at the Department of Sociology at the University of Texas at Austin, 305 E. 23rd Street, A1700, RLP 3.306, Austin, TX 78712; telephone (512) 475-8641; email sbrayne@utexas.edu.

© The Author(s) 2020. Published by Oxford University Press on behalf of the Society for the Study of Social Problems. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

In recent years, algorithms and artificial intelligence have attracted a great deal of scholarly and journalistic attention. Of particular interest is the development of predictive technologies designed to estimate the likelihood of a future event, such as the probability that an individual will default on a loan, the likelihood that a consumer will buy a specific product online, or the odds that a job candidate will have a long tenure in an organization. Predictive algorithms capture the imagination of scholars and journalists alike, in part because they raise the question of automated judgment: the replacement – or at least the augmentation – of human discretion by mechanical procedures. Nowhere are these questions more salient than in the context of criminal justice. Over recent decades, the U.S. criminal justice system has witnessed a proliferation of algorithmic technologies.
Police departments now increasingly rely on predictive software programs to target potential victims and offenders and to predict when and where future crimes are likely to occur (Brayne 2017; Ferguson 2017). Likewise, criminal courts use multiple predictive instruments, called “risk-assessment tools,” to assess defendants’ risk of recidivism or of failure to appear in court (Hannah-Moffat 2018; Harcourt 2006; Monahan and Skeem 2016).

Predictive technologies, in turn, raise many questions about fairness and inequality in criminal justice. On the positive side, advocates emphasize the benefits of using “smart statistics” to reduce crime and improve a dysfunctional criminal justice system characterized by racial discrimination and mass incarceration (Brantingham, Valasik, and Mohler 2018; Milgram 2012). On the negative side, critics argue that algorithms tend to embed bias and reinforce social and racial inequalities rather than reduce them (Benjamin 2019; Eubanks 2018; O’Neil 2016). They note that predictive algorithms draw on variables or proxies that are unfair and may be unconstitutional (Ferguson 2017; Starr 2014). Many point out that predictive algorithms may lead individuals to be surveilled and detained for crimes they have not yet committed, frequently comparing these technologies to Philip K. Dick’s science-fiction story Minority Report and its film adaptation, which evoke a dystopian future.

To date, studies of criminal justice algorithms share three main characteristics. First, existing work tends to focus on the construction of algorithms, highlighting the proprietary aspect of most of these tools (which are often built by private companies) and criticizing their opacity (Angwin et al. 2016; Pasquale 2015; Wexler 2017). Second, they tend to treat the criminal justice system as a monolith, lumping together the cases of law enforcement, adjudication, sentencing, and community supervision (O’Neil 2016; Scannell 2016). Third, and most importantly, most studies fail to analyze contexts of reception, implicitly assuming – usually without empirical data – that police officers, judges, and prosecutors rely uncritically on what algorithms direct them to do in their daily routines (Harcourt 2006; Hvistendahl 2016; Mohler et al. 2015; Uchida and Swatt 2013).

In this article, we adopt a different perspective. Building on a growing body of literature that analyzes the impact of big data in criminal justice (Hannah-Moffat, Maurutto, and Turnbull 2009; Lageson 2017; Lum, Koper, and Willis 2017; Sanders, Weston, and Schott 2015; Stevenson and Doleac 2018), as well as existing ethnographic work on the uses of algorithms (Brayne 2017; Christin 2017; Levy 2015; Rosenblat and Stark 2016; Shestakovsky 2017), we focus on the reception of predictive algorithms in different segments of the criminal justice system. Drawing on two in-depth ethnographic studies – one conducted in a police department and the other in a criminal court – we examine two questions. First, to what extent does the adoption of predictive algorithms affect work practices in policing and criminal courts? Second, how do practitioners respond to algorithmic technologies (i.e., do they embrace or contest them)?

Based on this ethnographic material, this article provides several key findings. First, we document a widespread – albeit uneven – use of big data technologies on the ground.
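[Editor's illustration] The “risk-assessment tools” introduced above are, at their core, scoring functions: they combine a handful of recorded case-history features into an estimated probability of an outcome such as failure to appear, which is then reported to practitioners as a coarse risk level. The sketch below is a minimal, purely illustrative version of that logic; the feature names, weights, and cut-offs are invented for this example and do not describe any particular instrument examined in this article.

```python
import math

# Illustrative only: hypothetical features and weights, not taken from any
# real risk-assessment instrument discussed in the article.
WEIGHTS = {
    "prior_arrests": 0.30,   # count of prior arrests
    "prior_ftas": 0.55,      # count of prior failures to appear
    "age_under_25": 0.40,    # 1 if the defendant is under 25, else 0
    "pending_charge": 0.25,  # 1 if another charge is pending, else 0
}
INTERCEPT = -2.0


def risk_probability(features: dict) -> float:
    """Logistic combination of the features: returns a probability between 0 and 1."""
    z = INTERCEPT + sum(w * features.get(name, 0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))


def risk_band(p: float) -> str:
    """Map the probability to the coarse label practitioners typically see."""
    if p < 0.3:
        return "low"
    if p < 0.6:
        return "medium"
    return "high"


if __name__ == "__main__":
    defendant = {"prior_arrests": 2, "prior_ftas": 1, "age_under_25": 1, "pending_charge": 0}
    p = risk_probability(defendant)
    print(f"estimated probability: {p:.2f} -> {risk_band(p)} risk")
```

The point of the sketch is simply that the “prediction” is a fixed function of what is already recorded about a person; which features enter that function, and how they are weighted, is precisely where the fairness concerns reviewed above arise.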
In policing, big data are used for both person-based and place-based predictive identification, in addition to risk management, crime analysis, and investigations. In criminal courts, multiple predictive instruments, complemented by digital case management systems, are employed to quantify defendants’ risk. Second, similar arguments are used in policing and courts to justify the use of predictive technologies. In both cases, algorithms are presented as more rational and objective than “gut feelings” or discretionary judgments. Third, we find similar strategies of resistance among law enforcement and legal professionals, fueled by fears of experiential devaluation and increased managerial surveillance; the most important are foot-dragging and data obfuscation.

Despite these resemblances, we document important differences between our two cases. In particular, law enforcement officers were under more direct pressure to use the algorithms, whereas the legal professionals under consideration were able to keep their distance and ignore predictive technologies without consequences, a finding we relate to the distinct hierarchical structures and levels of managerial oversight of the police department and criminal court we compared.

We conclude by discussing the implications of these findings for research on technology and inequality in criminal justice. Whereas the current wave of critical scholarship on algorithmic bias often leans upon technologically deterministic narratives in order to make social justice claims, here we focus on the social and institutional contexts within which such predictive systems are deployed and negotiated. In the process, we show that these tools acquire political nuance and meaning through practice, which can lead to unanticipated or undesirable outcomes: forms of workplace surveillance and the displacement of discretion to less accountable places. We argue that this sheds new light on the transformations of police and judicial discretion – with important consequences for social and racial inequality – in the age of big data.

DECISION MAKING ACROSS A VARIETY OF DOMAINS

As a growing number of daily activities now take place online, an unprecedented amount of digital information is being collected, stored, and analyzed, making it possible to aggregate data across previously separate institutional settings. Harnessing this rapidly expanding corpus of digitized information, algorithms – broadly defined here as “[a] formally specified sequence(s) of logical operations that provides step-by-step instructions for computers to act on data and thus automate decisions” (Barocas et al. 2014) – are being used to guide decision-making across institutional domains as varied as education, journalism, credit, and criminal justice (Brayne 2017; Christin 2018; Fourcade and Healy 2017; O’Neil 2016; Pasquale 2015). Advocates for algorithmic technologies argue that by relying on “unbiased” assessments, algorithms may help deploy resources more efficiently and objective
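[Editor's illustration] To make the Barocas et al. (2014) definition quoted above concrete, the following sketch shows a “formally specified sequence of logical operations” of the place-based kind mentioned earlier in this section: count recent incidents per map grid cell and rank the cells suggested for attention. The incident records, lookback window, and cell identifiers are invented for illustration; deployed predictive policing systems rely on far richer data and more elaborate, typically proprietary, models.

```python
from collections import Counter

# Hypothetical incident log as (grid_cell_id, days_ago) pairs; illustration only.
INCIDENTS = [
    ("cell_12", 1), ("cell_12", 3), ("cell_07", 2),
    ("cell_12", 10), ("cell_07", 20), ("cell_31", 5),
]


def hotspot_ranking(incidents, window_days=14, top_k=3):
    """Rank grid cells by how many incidents were recorded within the lookback window."""
    recent = Counter(cell for cell, days_ago in incidents if days_ago <= window_days)
    return recent.most_common(top_k)


if __name__ == "__main__":
    for cell, count in hotspot_ranking(INCIDENTS):
        print(f"{cell}: {count} incidents in the last 14 days")
```

Even this toy version makes the critics’ concern visible: the ranking is entirely a function of where incidents were previously recorded, so patterns in past data carry straight through to the next deployment suggestions.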
