Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach
arXiv - CS - Computers and Society. Pub Date: 2020-03-13, DOI: arxiv-2003.07703
Valérie Beaudouin (SES), Isabelle Bloch (IMAGES), David Bounie (IP Paris, ECOGE, SES), Stéphan Clémençon (LPMA), Florence d'Alché-Buc, James Eagan (DIVA), Winston Maxwell, Pavlo Mozharovskyi (IRMAR), Jayneel Parekh

The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning. Deep learning methods are remarkably accurate, but also opaque, which limits their potential use in safety-critical applications. To achieve trust and accountability, designers and operators of machine learning algorithms must be able to explain an algorithm's inner workings, its results, and the causes of its failures to users, regulators, and citizens. The originality of this paper is to combine technical, legal and economic aspects of explainability to develop a framework for defining the "right" level of explainability in a given context. We propose three logical steps: First, define the main contextual factors, such as the audience of the explanation, the operational context, the level of harm the system could cause, and the legal/regulatory framework. This step helps characterize the operational and legal needs for explanation, and the corresponding social benefits. Second, examine the technical tools available, including post hoc approaches (e.g., input perturbation, saliency maps) and hybrid AI approaches. Third, as a function of the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved. We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed costs.
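
The second step mentions post hoc tools such as input perturbation and saliency maps. Below is a minimal sketch of what a perturbation-based (global) explanation can look like; it is not the paper's method, and it assumes a generic scikit-learn-style classifier with a predict method plus hypothetical NumPy validation arrays X_val and y_val. It scores each input feature by shuffling that feature and measuring the resulting drop in validation accuracy.

import numpy as np

def perturbation_importance(model, X_val, y_val, n_repeats=10, seed=0):
    """Estimate global feature importance by perturbing one input at a time."""
    rng = np.random.default_rng(seed)
    # Baseline accuracy on unperturbed validation data.
    baseline = np.mean(model.predict(X_val) == y_val)
    importances = np.zeros(X_val.shape[1])
    for j in range(X_val.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_pert = X_val.copy()
            # Perturb a single feature by permuting its values across samples.
            X_pert[:, j] = rng.permutation(X_pert[:, j])
            drops.append(baseline - np.mean(model.predict(X_pert) == y_val))
        # A larger average accuracy drop means the model relies more on feature j.
        importances[j] = np.mean(drops)
    return importances

Such a perturbation score is a global explanation output; local outputs (e.g., a saliency map for one prediction) would instead attribute a single decision to individual inputs.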

Updated: 2020-03-18