A statutory right to explanation for decisions generated using artificial intelligence
International Journal of Law and Information Technology Pub Date : 2020-08-25 , DOI: 10.1093/ijlit/eaaa016
Joshua Gacutan , Niloufer Selvadurai

Abstract
As artificial intelligence technologies are increasingly deployed by government and commercial entities to generate automated and semi-automated decisions, the right to an explanation for such decisions has become a critical legal issue. As the internal logic of machine learning algorithms is typically opaque, the absence of a right to explanation can weaken an individual’s ability to challenge such decisions. This article considers the merits of enacting a statutory right to explanation for automated decisions. To this end, it begins by considering a theoretical justification for a right to explanation, examines consequentialist and deontological approaches to protection, and considers the appropriate ambit of such a right, comparing absolute transparency with partial transparency and counterfactual explanations. The article then analyses insights provided by the European Union’s General Data Protection Regulation before concluding by recommending an option for reform to protect the legitimate interests of individuals affected by automated decisions.


Last updated: 2020-08-25