The quest of parsimonious XAI: A human-agent architecture for explanation formulation
Artificial Intelligence ( IF 5.1 ) Pub Date : 2021-08-08 , DOI: 10.1016/j.artint.2021.103573
Yazan Mualla 1 , Igor Tchappi 1, 2, 3 , Timotheus Kampik 4 , Amro Najjar 5 , Davide Calvaresi 6 , Abdeljalil Abbas-Turki 1 , Stéphane Galland 1 , Christophe Nicolle 7
With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guaranteeing successful human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent empirical studies have confirmed that explaining a system's behavior to human users fosters the users' acceptance of the system. However, providing overwhelming or unnecessary information may also confuse users and cause failure. For these reasons, parsimony has been identified as one of the key features enabling successful human-agent interaction, with a parsimonious explanation defined as the simplest explanation (i.e., the least complex) that describes the situation adequately (i.e., with descriptive adequacy). While parsimony is receiving growing attention in the literature, most works remain on the conceptual front. This paper proposes a mechanism for parsimonious eXplainable AI (XAI). In particular, it introduces the process of explanation formulation and proposes HAExA, a human-agent explainability architecture that makes this process operational for remote robots. To provide parsimonious explanations, HAExA relies on both contrastive explanations and explanation filtering. To evaluate the proposed architecture, several research hypotheses are investigated in an empirical user study that relies on well-established XAI metrics to estimate how trustworthy and satisfactory the explanations provided by HAExA are. The results are analyzed using parametric and non-parametric statistical tests.
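The abstract's notion of a parsimonious explanation — the least complex candidate that still describes the situation adequately — can be sketched as a simple filtering step. The class, field names, and integer complexity measure below are illustrative assumptions for this sketch, not HAExA's actual data model or API:

```python
# Hypothetical sketch of parsimonious explanation filtering: among candidate
# explanations that are descriptively adequate, keep the least complex one.
# The complexity measure (e.g., number of causes cited) is an assumption.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Explanation:
    text: str
    complexity: int   # illustrative proxy: number of causes cited
    adequate: bool    # does it describe the situation adequately?


def filter_parsimonious(candidates: list[Explanation]) -> Optional[Explanation]:
    """Return the simplest explanation that is still adequate, or None."""
    adequate = [e for e in candidates if e.adequate]
    if not adequate:
        return None
    return min(adequate, key=lambda e: e.complexity)


# A contrastive-style candidate set for a remote robot (made-up examples):
candidates = [
    Explanation("I stopped because a human crossed my path.", 1, True),
    Explanation("I stopped because a human crossed my path, my battery "
                "is at 40%, and route B is shorter.", 3, True),
    Explanation("I stopped.", 0, False),  # too terse: not adequate
]

best = filter_parsimonious(candidates)
print(best.text)  # → "I stopped because a human crossed my path."
```

The point of the sketch is the ordering of the two constraints: adequacy acts as a hard filter first, and complexity is minimized only among the survivors, so parsimony never sacrifices descriptive adequacy.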




Updated: 2021-09-01