Spatial relation learning for explainable image classification and annotation in critical applications
Artificial Intelligence ( IF 5.1 ) Pub Date : 2021-03-01 , DOI: 10.1016/j.artint.2020.103434
Régis Pierrard , Jean-Philippe Poli , Céline Hudelot

Abstract With the recent successes of black-box models in Artificial Intelligence (AI) and the growing interactions between humans and AIs, explainability issues have arisen. In this article, in the context of high-stakes applications, we propose an approach for explainable classification and annotation of images. It is based on a transparent model, whose reasoning is accessible and human-understandable, and on interpretable fuzzy relations that can express the vagueness of natural language. The knowledge about relations is set beforehand by an expert, so training instances do not need to be annotated. The most relevant relations are extracted using a fuzzy frequent itemset mining algorithm in order to build rules for classification and constraints for annotation. We also present two heuristics that speed up the process of evaluating relations. Since the strengths of our approach are the transparency of the model and the interpretability of the relations, an explanation in natural language can be generated. Supported by experimental results, we show that, given a segmentation of the input, our approach is able to successfully perform the target task and generate explanations that were judged consistent and convincing by a set of participants.
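The rule-extraction step described above relies on fuzzy frequent itemset mining over expert-defined relations. The following is a minimal sketch of that idea, not the paper's actual algorithm: the relation names, membership degrees, and the choice of the min t-norm with a mean-based fuzzy support are illustrative assumptions.

```python
from itertools import combinations

# Hypothetical fuzzy spatial-relation memberships for three segmented images.
# Each row maps a relation (on labeled regions A, B, C) to a degree in [0, 1].
instances = [
    {"left_of(A,B)": 0.9, "above(A,C)": 0.8, "near(B,C)": 0.2},
    {"left_of(A,B)": 0.7, "above(A,C)": 0.9, "near(B,C)": 0.1},
    {"left_of(A,B)": 0.8, "above(A,C)": 0.6, "near(B,C)": 0.9},
]

def fuzzy_support(itemset, data):
    """Mean over instances of the min t-norm of the itemset's memberships."""
    return sum(min(row.get(i, 0.0) for i in itemset) for row in data) / len(data)

def frequent_itemsets(data, min_support=0.5, max_size=2):
    """Enumerate itemsets of relations whose fuzzy support meets the threshold."""
    items = sorted({i for row in data for i in row})
    result = {}
    for size in range(1, max_size + 1):
        for combo in combinations(items, size):
            s = fuzzy_support(combo, data)
            if s >= min_support:
                result[combo] = round(s, 3)
    return result

print(frequent_itemsets(instances))
```

Itemsets that pass the support threshold (here, {left_of(A,B), above(A,C)}) would then be turned into classification rules or annotation constraints; a real implementation would prune candidates Apriori-style rather than enumerate all combinations.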

Updated: 2021-03-01