EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
arXiv - CS - Symbolic Computation. Pub Date: 2021-04-24, DOI: arxiv-2104.11914
Natalia Díaz-Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, Siham Tabik, David Filliat, Policarpo Cruz, Rosana Montes, Francisco Herrera

The latest Deep Learning (DL) models for detection and classification have achieved unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods that are hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience. In contrast, symbolic AI systems that convert concepts into rules or symbols -- such as knowledge graphs -- are easier to explain. However, they present lower generalisation and scaling capabilities. A very important challenge is to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is to leverage the best of both streams without obviating domain expert knowledge. We tackle this problem by assuming that the symbolic knowledge is expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric to assess the level of alignment between machine and human expert explanations. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so that they serve as a sound basis for explainability. The X-NeSyL methodology involves the concrete use of two notions of explanation, at inference and training time respectively: 1) EXPLANet: Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture, a compositional CNN that makes use of symbolic representations, and 2) SHAP-Backprop, an explainable-AI-informed training procedure that guides the DL process to align with such symbolic representations in the form of knowledge graphs. We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification, and demonstrate that our approach improves both explainability and performance.
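The abstract describes SHAP-Backprop only at a high level. As a rough illustration of the underlying idea -- checking SHAP part attributions against an expert knowledge graph and penalizing disagreement -- the following minimal Python/NumPy sketch may help. The style and part names, the KG matrix, and the misattribution_penalty function are hypothetical stand-ins invented for this illustration; the paper's actual procedure feeds a misattribution signal back into the part detector's training loss, which is not reproduced here.

import numpy as np

# Illustrative expert knowledge graph: KG[s, p] = 1 if architectural
# part p is expected for monument style s (values are made up for
# this sketch, not taken from the MonuMAI knowledge graph).
STYLES = ["hispanic-muslim", "gothic", "renaissance", "baroque"]
PARTS  = ["horseshoe arch", "pointed arch", "round arch", "broken pediment"]
KG = np.array([
    [1, 0, 0, 0],   # hispanic-muslim <- horseshoe arch
    [0, 1, 0, 0],   # gothic          <- pointed arch
    [0, 0, 1, 0],   # renaissance     <- round arch
    [0, 0, 0, 1],   # baroque         <- broken pediment
])

def misattribution_penalty(shap_values, true_style):
    """Score how much part attributions contradict the expert graph.

    shap_values: array of shape (n_parts,) holding the SHAP attribution
                 of each detected part toward the predicted style
                 (obtained from any SHAP explainer over the classifier).
    true_style:  index of the ground-truth style in STYLES.
    """
    expected = KG[true_style]
    # Negative attribution on an expected part, or positive attribution
    # on an unrelated part, both count as misattribution.
    disagreement = np.where(expected == 1,
                            np.minimum(shap_values, 0.0),
                            np.maximum(shap_values, 0.0))
    return float(np.abs(disagreement).sum())

# Toy usage: the explainer credits "pointed arch" for a prediction whose
# ground truth is hispanic-muslim, which contradicts the graph.
phi = np.array([0.05, 0.40, -0.02, 0.01])
print(misattribution_penalty(phi, true_style=0))  # ~0.41

In a training loop, a penalty like this could be turned into a per-part weight on the detection loss, so that parts whose attributions contradict the graph are corrected more aggressively; that weighting scheme is the portion of SHAP-Backprop this sketch deliberately leaves out.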

Updated: 2021-04-27