Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality
Information & Communications Technology Law (IF 1.8), Pub Date: 2018-04-03, DOI: 10.1080/13600834.2018.1458455
Marion Oswald 1 , Jamie Grace 2 , Sheena Urwin 3 , Geoffrey C. Barnes 4

ABSTRACT As is common across the public sector, the UK police service is under pressure to do more with less, to target resources more efficiently and to take steps to identify threats proactively, for example under risk-assessment schemes such as ‘Clare’s Law’ and ‘Sarah’s Law’. Algorithmic tools promise to improve a police force’s decision-making and prediction abilities by making better use of data (including intelligence) from both inside and outside the force. This article uses Durham Constabulary’s Harm Assessment Risk Tool (HART) as a case study. HART is one of the first algorithmic models to be deployed by a UK police force in an operational capacity. Our article comments upon the potential benefits of such tools, explains the concept and method of HART, and considers the results of the first validation of the model’s use and accuracy. The article then critiques the use of algorithmic tools within policing from a societal and legal perspective, focusing in particular upon substantive common law grounds for judicial review. It considers a concept of ‘experimental’ proportionality that would permit the use of unproven algorithms in the public sector in a controlled and time-limited way and, as part of a combination of approaches to combat algorithmic opacity, proposes ‘ALGO-CARE’, a guidance framework setting out some of the key legal and practical concerns that should be considered in relation to the use of algorithmic risk assessment tools by the police. The article concludes that, for the use of algorithmic tools in a policing context to result in a ‘better’ outcome, that is to say, a more efficient use of police resources in a landscape of more consistent, evidence-based decision-making, an ‘experimental’ proportionality approach should be developed to ensure that new solutions from ‘big data’ can be found for criminal justice problems traditionally arising from clouded, non-augmented decision-making.
Finally, this article notes that there is a sub-set of decisions whose impact upon society and upon the welfare of individuals is too great for them to be influenced by an emerging technology; to such an extent, in fact, that they should be removed from the influence of algorithmic decision-making altogether.
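To make the kind of tool under discussion concrete: HART assigns arrestees to one of three forecast categories (high, moderate or low risk) using an ensemble of decision trees. The sketch below is purely illustrative and is not Durham Constabulary’s model — the feature names, thresholds and the cautious tie-breaking rule are all invented for the example; it shows only the general shape of ensemble-based risk triage.

```python
# Toy illustration of ensemble-based risk triage (NOT the HART model).
# Each "tree" is a trivial hand-written rule; the ensemble takes a
# majority vote, loosely mimicking how a random forest aggregates
# many weak learners into one categorical forecast.

from collections import Counter

def tree_priors(record):
    # Hypothetical rule: many prior offences -> high risk.
    return "high" if record["prior_offences"] >= 5 else "low"

def tree_age(record):
    # Hypothetical rule: early first offence -> moderate risk.
    return "moderate" if record["age_at_first_offence"] < 18 else "low"

def tree_recency(record):
    # Hypothetical rule: recent offending -> high risk.
    return "high" if record["months_since_last_offence"] < 6 else "moderate"

FOREST = [tree_priors, tree_age, tree_recency]

def risk_category(record):
    """Majority vote across the ensemble; ties resolve to the
    higher-risk label, a common design choice when a missed
    high-risk case is treated as costlier than a false alarm."""
    votes = Counter(tree(record) for tree in FOREST)
    severity = {"high": 2, "moderate": 1, "low": 0}
    top = max(votes.values())
    tied = [label for label, n in votes.items() if n == top]
    return max(tied, key=severity.__getitem__)

example = {"prior_offences": 6,
           "age_at_first_offence": 25,
           "months_since_last_offence": 3}
print(risk_category(example))  # -> high
```

The cautious tie-breaking rule echoes, in a simplified way, the asymmetric treatment of errors that risk-forecasting tools of this kind often adopt; the legal and societal critique in the article applies precisely to design choices like these, which embed value judgements inside ostensibly technical code.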

Updated: 2018-04-03