EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case
Information Fusion (IF 14.7), Pub Date: 2021-10-13, DOI: 10.1016/j.inffus.2021.09.022
Natalia Díaz-Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, Siham Tabik, David Filliat, Policarpo Cruz, Rosana Montes, Francisco Herrera

The latest Deep Learning (DL) models for detection and classification have achieved unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods that are hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience such as end-users or domain experts. In contrast, symbolic AI systems that convert concepts into rules or symbols (such as knowledge graphs) are easier to explain, but they offer lower generalization and scaling capabilities. A key challenge is therefore to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is to leverage the best of both streams without leaving domain expert knowledge aside. In this paper, we tackle this problem by assuming that the symbolic knowledge is expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-Symbolic Learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric that assesses how well machine explanations align with those of human experts. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so that it serves as a sound basis for explainability. In particular, the X-NeSyL methodology involves two concrete notions of explanation, at inference time and at training time respectively: (1) EXPLANet (Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture), a compositional convolutional neural network that makes use of symbolic representations, and (2) SHAP-Backprop, an explainable-AI-informed training procedure that corrects and guides the DL process to align with such symbolic representations in the form of knowledge graphs. We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification, and demonstrate that with our approach it is possible to improve explainability and performance at the same time.
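
The abstract does not give implementation details, so the following is only a minimal Python sketch of the intuition behind SHAP-Backprop as described above: part-level attributions are checked against an expert knowledge graph, and positive attribution placed on parts that the graph does not associate with the predicted style is flagged as misattribution. The style-to-part mapping, the function name, and the variable names are hypothetical and for illustration only; they are not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation) of the SHAP-Backprop idea:
# compare part-level SHAP attributions against an expert knowledge graph and
# measure the attribution mass placed on parts the graph does not link to the
# predicted class. All names (ARCH_STYLE_PARTS, shap_misattribution_loss, ...)
# are hypothetical.

import numpy as np

# Hypothetical expert knowledge graph: monument style -> architectural parts
# that may appear on its facades (toy subset, for illustration only).
ARCH_STYLE_PARTS = {
    "hispanic-muslim": {"horseshoe-arch", "lobed-arch"},
    "gothic":          {"pointed-arch", "trefoil-arch"},
    "renaissance":     {"round-arch", "triangular-pediment"},
    "baroque":         {"solomonic-column", "broken-pediment"},
}
PARTS = sorted({p for parts in ARCH_STYLE_PARTS.values() for p in parts})

def shap_misattribution_loss(shap_values, predicted_style):
    """Sum the positive SHAP mass assigned to parts that the knowledge graph
    does not associate with the predicted style (shap_values holds one value
    per entry of PARTS)."""
    allowed = ARCH_STYLE_PARTS[predicted_style]
    penalty = 0.0
    for part, value in zip(PARTS, shap_values):
        if part not in allowed and value > 0:
            penalty += value
    return penalty

# Usage: a classifier that (wrongly) credits a Gothic part for a Baroque prediction.
shap_vals = np.array([0.05 if p == "pointed-arch" else 0.0 for p in PARTS])
print(shap_misattribution_loss(shap_vals, "baroque"))  # > 0 -> attribution disagrees with the graph
```

In the methodology described in the abstract this kind of misalignment signal is fed back into training (hence "Backprop") to correct the DL model; the sketch only computes and prints the alignment check.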



Updated: 2021-10-21