From Philosophy to Interfaces: an Explanatory Method and a Tool Inspired by Achinstein's Theory of Explanation
arXiv - CS - Human-Computer Interaction Pub Date : 2021-09-09 , DOI: arxiv-2109.04171
Francesco Sovrano, Fabio Vitali

We propose a new method for explanations in Artificial Intelligence (AI) and a tool to test its expressive power within a user interface. To bridge the gap between philosophy and human-computer interfaces, we present a new approach for generating interactive explanations, based on a sophisticated pipeline of AI algorithms that structures natural language documents into knowledge graphs and answers questions effectively and satisfactorily. Among the mainstream philosophical theories of explanation, we identified one that, in our view, is most readily applicable as a practical model for user-centric tools: Achinstein's Theory of Explanation. With this work we aim to show that Achinstein's theory can indeed be adapted and implemented in a concrete software application, as an interactive question-answering process. To this end, we found a way to handle the generic (archetypal) questions that implicitly characterise an explanatory process as preliminary overviews, rather than as answers to explicit questions, as commonly understood. To demonstrate the expressive power of this approach, we designed and implemented a pipeline of AI algorithms for generating interactive explanations in the form of overviews, focusing on this aspect of explanations rather than on existing interfaces and presentation-logic layers for question answering. We tested our hypothesis on a well-known XAI-powered credit-approval system by IBM, comparing CEM, a static tool for post-hoc explanations, with an extension we developed that adds interactive explanations based on our model. The results of the user study, involving more than 100 participants, showed that our proposed solution produced a statistically significant improvement in effectiveness (U=931.0, p=0.036) over the baseline, thus providing evidence in favour of our theory.
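The reported result pairs a U statistic with a p-value, which is the shape of a Mann-Whitney U test, the usual non-parametric comparison of two independent groups of effectiveness scores. The following is a minimal illustrative sketch of how such a U statistic is computed, assuming a Mann-Whitney test was used; the group names and scores below are hypothetical, not data from the study.

```python
# Illustrative sketch: computing a Mann-Whitney U statistic for two groups
# of per-participant effectiveness scores. All data below are made up.

def mann_whitney_u(sample_a, sample_b):
    """Return U for sample_a versus sample_b.

    U counts, over all cross-group pairs, how often a value from
    sample_a exceeds one from sample_b (ties contribute 0.5).
    """
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical effectiveness scores (0..1) for the two conditions
interactive = [0.70, 0.65, 0.80, 0.55, 0.75]  # tool with interactive explanations
baseline = [0.40, 0.55, 0.60, 0.35, 0.50]     # static post-hoc explanations only

u_interactive = mann_whitney_u(interactive, baseline)
u_baseline = mann_whitney_u(baseline, interactive)
print(u_interactive, u_baseline)  # the two U values always sum to len(a) * len(b)
```

In practice one would use a library routine (e.g. `scipy.stats.mannwhitneyu`) to also obtain the p-value; the hand-rolled version above only shows what the statistic itself measures.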

Updated: 2021-09-10