PASTLE: Pivot-aided space transformation for local explanations
Pattern Recognition Letters ( IF 5.1 ) Pub Date : 2021-06-15 , DOI: 10.1016/j.patrec.2021.05.018
Valerio La Gatta , Vincenzo Moscato , Marco Postiglione , Giancarlo Sperlì

During the last decade, more and more Artificial Intelligence systems have been designed with complex and sophisticated architectures to reach unprecedented predictive performance. The side effect is increased opacity of their inner workings, which is unacceptable when such systems are applied in critical domains (healthcare, finance, and so on). The eXplainable AI (XAI) research field aims to overcome this limitation, helping humans understand black-box decisions. In this paper we propose a novel model-agnostic XAI technique, named Pivot-Aided Space Transformation for Local Explanations (PASTLE), which exploits an instance-space transformation to explain any model’s predictions, aiming to enhance human trust in AI decisions. We experimentally evaluate the effects of the introduced space transformation on various real-world data sets, and our user study reveals promising results in terms of effective explainability.
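The abstract does not give implementation details, but the general idea of a pivot-aided space transformation for local explanations can be sketched as follows. Everything here is an assumption for illustration: the pivot choice, the perturbation scheme, the proximity kernel, and the ridge surrogate are illustrative stand-ins, not the authors' actual PASTLE method. The sketch maps each instance to its distances from a small set of pivot points, then fits an interpretable linear surrogate in that transformed space around the instance being explained.

```python
import numpy as np

# Hypothetical sketch only: pivots, kernel, and surrogate are illustrative
# choices, not the method described in the paper.

def pivot_transform(X, pivots):
    """Map each instance to its Euclidean distances from the pivot points."""
    # (n, 1, d) - (1, p, d) -> (n, p, d); norm over features -> (n, p)
    return np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=2)

def explain_locally(black_box_predict, x, pivots,
                    n_samples=500, scale=0.1, alpha=1.0, seed=0):
    """Fit a weighted linear surrogate in pivot-distance space around x."""
    rng = np.random.default_rng(seed)
    # Perturb around the instance in the original feature space.
    Z = x + scale * rng.standard_normal((n_samples, x.shape[0]))
    y = black_box_predict(Z)              # black-box predictions to mimic
    Zt = pivot_transform(Z, pivots)       # move samples to pivot space
    # Proximity kernel: samples closer to x get higher weight.
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)
    # Weighted ridge regression solved in closed form (numpy only).
    A = Zt.T @ (Zt * w[:, None]) + alpha * np.eye(Zt.shape[1])
    b = Zt.T @ (w * y)
    # One coefficient per pivot: how closeness to that pivot drives the output.
    return np.linalg.solve(A, b)

# Toy usage with a synthetic "black box".
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
pivots = X[:3]  # in practice, e.g. cluster centroids or prototypes
f = lambda Z: Z @ np.array([1.0, -2.0, 0.5, 0.0])  # stand-in model
coefs = explain_locally(f, X[10], pivots)
print(coefs.shape)  # one weight per pivot
```

The appeal of such a transformation is that the explanation is phrased in terms of similarity to reference points (pivots) rather than raw feature values, which can be easier for users to interpret when features themselves are opaque.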




Updated: 2021-06-28