Artificial intelligence explainability: the technical and ethical dimensions
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences (IF 4.3). Pub Date: 2021-08-16. DOI: 10.1098/rsta.2020.0363
John A. McDermid, Yan Jia, Zoe Porter, Ibrahim Habli

In recent years, several new technical methods have been developed to make AI models more transparent and interpretable. These techniques are often referred to collectively as ‘AI explainability’ or ‘XAI’ methods. This paper presents an overview of XAI methods and links them to the purposes for which stakeholders seek an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that the use of XAI methods must be linked to explanations of the human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require.

This article is part of the theme issue ‘Towards symbiotic autonomous systems’.



