Fairness, accountability and transparency: notes on algorithmic decision-making in criminal justice
International Journal of Law in Context (IF 1.170) | Pub Date: 2019-06-20 | DOI: 10.1017/s1744552319000077
Vincent Chiao

Over the last few years, legal scholars, policy-makers, activists and others have generated a vast and rapidly expanding literature concerning the ethical ramifications of using artificial intelligence, machine learning, big data and predictive software in criminal justice contexts. These concerns can be clustered under the headings of fairness, accountability and transparency. First, can we trust technology to be fair, especially given that the data on which the technology is based are biased in various ways? Second, whom can we blame if the technology goes wrong, as it inevitably will on occasion? Finally, does it matter if we do not know how an algorithm works or, relatedly, cannot understand how it reached its decision? I argue that, while these are serious concerns, they are not irresolvable. More importantly, the very same concerns of fairness, accountability and transparency apply, with even greater urgency, to existing modes of decision-making in criminal justice. The question, hence, is comparative: can algorithmic modes of decision-making improve upon the status quo in criminal justice? There is unlikely to be a categorical answer to this question, although there are some reasons for cautious optimism.
