Logic Explained Networks
Artificial Intelligence (IF 5.1) Pub Date: 2022-11-16, DOI: 10.1016/j.artint.2022.103822
Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Liò, Marco Maggini, Stefano Melacci

The large and still increasing popularity of deep learning clashes with a major limitation of neural network architectures: their inability to provide human-understandable motivations for their decisions. In situations where the machine is expected to support the decisions of human experts, providing a comprehensible explanation is a feature of crucial importance. The language used to communicate the explanations must be formal enough to be implementable in a machine and friendly enough to be understandable by a wide audience. In this paper, we propose a general approach to Explainable Artificial Intelligence for neural architectures, showing how a mindful design of the networks leads to a family of interpretable deep learning models called Logic Explained Networks (LENs). LENs only require their inputs to be human-understandable predicates, and they provide explanations in terms of simple First-Order Logic (FOL) formulas involving such predicates. LENs are general enough to cover a large number of scenarios. Amongst them, we consider the case in which LENs are directly used as special classifiers with the capability of being explainable, and the case in which they act as additional networks whose role is to create the conditions for making a black-box classifier explainable by FOL formulas. Although supervised learning problems are mostly emphasized, we also show that LENs can learn and provide explanations in unsupervised learning settings. Experimental results on several datasets and tasks show that LENs may yield better classifications than established white-box models, such as decision trees and Bayesian rule lists, while providing more compact and meaningful explanations.
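To make the idea concrete, the following is a minimal sketch of the LEN recipe in plain PyTorch. It is not the authors' implementation: the concept names, the ConceptMLP model, the L1-weight relevance proxy (standing in for the paper's concept-selection machinery), and the extract_fol helper are all illustrative assumptions. The sketch trains a small classifier on binary concept predicates and reads off a class-level FOL explanation as a disjunction of example-level conjunctions over the most relevant concepts.

    # Hypothetical sketch of the LEN idea, not the authors' code.
    # Inputs are binary, human-understandable concept predicates; the output
    # is a class-level First-Order Logic rule over those predicates.
    import torch
    import torch.nn as nn

    CONCEPTS = ["has_wings", "has_beak", "has_fur"]  # human-understandable predicates

    class ConceptMLP(nn.Module):
        """Small classifier over concept activations; an L1 penalty on the first
        layer pushes it to rely on few concepts, keeping explanations compact."""
        def __init__(self, n_concepts, n_classes):
            super().__init__()
            self.first = nn.Linear(n_concepts, 16)
            self.net = nn.Sequential(nn.ReLU(), nn.Linear(16, n_classes))
        def forward(self, c):
            return self.net(self.first(c))

    def example_explanation(c_row, relevant):
        """Conjunction of (possibly negated) relevant concepts for one sample."""
        terms = [(name if c_row[i] > 0.5 else f"~{name}")
                 for i, name in enumerate(CONCEPTS) if i in relevant]
        return " & ".join(terms)

    def extract_fol(model, C, y, target, k=2):
        """Class-level explanation: OR of example-level conjunctions over the k
        concepts with the largest first-layer weight norm (a simple relevance proxy)."""
        with torch.no_grad():
            relevance = model.first.weight.abs().sum(0)      # per-concept score
            relevant = set(torch.topk(relevance, k).indices.tolist())
            preds = model(C).argmax(1)
            hits = (preds == target) & (y == target)         # correctly classified
            clauses = {example_explanation(C[i], relevant)
                       for i in torch.where(hits)[0]}
        return " | ".join(f"({cl})" for cl in sorted(clauses)) + f" -> class_{target}"

    # Toy data: class 1 ("bird") holds iff has_wings & has_beak
    torch.manual_seed(0)
    C = torch.randint(0, 2, (256, len(CONCEPTS))).float()
    y = ((C[:, 0] > 0) & (C[:, 1] > 0)).long()

    model = ConceptMLP(len(CONCEPTS), 2)
    opt = torch.optim.Adam(model.parameters(), lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(model(C), y) + 1e-3 * model.first.weight.abs().sum()
        loss.backward()
        opt.step()

    print(extract_fol(model, C, y, target=1))
    # e.g. "(has_wings & has_beak) -> class_1"

On this toy task the extracted rule coincides with the generating rule; the paper's second usage mode would instead feed the concept predicates and a black-box classifier's predictions to such a surrogate, so the resulting FOL formula explains the black box rather than a model trained end-to-end.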




Updated: 2022-11-17