Explainable AI and the philosophy and practice of explanation
Computer Law & Security Review ( IF 3.3 ) Pub Date : 2020-10-05 , DOI: 10.1016/j.clsr.2020.105474
Kieron O'Hara

Considerations of the nature of explanation and the law are brought together to argue that computed accounts of AI systems’ outputs cannot function on their own as explanations of decisions informed by AI. The important context for this inquiry is set by Article 22(3) of GDPR. The paper looks at the question of what an explanation is from the point of view of the philosophy of science – i.e. it asks not what counts as explanatory in legal terms, or what an AI system might compute using provenance metadata, but rather what explanation as a social practice consists in, arguing that explanation is an illocutionary act, and that it should be considered as a process, not a text. It cannot therefore be computed, although computed accounts of AI systems are likely to be important inputs to the explanatory process.


