Explanatory Paradigms in Neural Networks: Towards relevant and contextual explanations
IEEE Signal Processing Magazine (IF 14.9) Pub Date: 2022-06-29, DOI: 10.1109/msp.2022.3163871
Ghassan AlRegib, Mohit Prabhushankar

In this article, we present a leap-forward expansion to the study of explainability in neural networks by considering explanations as answers to abstract reasoning-based questions. With P as the prediction from a neural network, these questions are "Why P?", "What if not P?", and "Why P, rather than Q?" for a given contrast prediction Q. The answers to these questions are observed correlations, counterfactuals, and contrastive explanations, respectively. Together, these explanations constitute the abductive reasoning scheme. The term observed refers to the specific case of posthoc explainability, when an explanatory technique explains the decision P after a trained neural network has made that decision. The primary advantage of viewing explanations through the lens of abductive reasoning-based questions is that explanations can then be used as reasons while making decisions. The posthoc field of explainability, which previously justified decisions, becomes active by being involved in the decision-making process and providing limited but relevant and contextual interventions. The contributions of this article are 1) realizing explanations as reasoning paradigms, 2) providing a probabilistic definition of observed explanations and their completeness, 3) creating a taxonomy for the evaluation of explanations, 4) positioning gradient-based complete explainability's replicability and reproducibility across multiple applications and data modalities, and 5) code repositories, which are publicly available at https://github.com/olivesgatech/Explanatory-Paradigms.
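For a concrete picture of how gradient-based answers to these questions can be realized, the following is a minimal sketch, not the authors' released implementation (see the repository above for that). It assumes a PyTorch setup and the common Grad-CAM recipe: "Why P?" backpropagates the score of the predicted class P, while "Why P, rather than Q?" backpropagates a contrast between the scores of P and Q. The resnet18 backbone, the choice of layer4, and the logit-difference contrast are illustrative assumptions.

```python
# Hypothetical sketch: Grad-CAM-style maps for "Why P?" and "Why P, rather than Q?"
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
feats = {}

def hook(module, inputs, output):
    # Cache the last conv block's activations and their gradients.
    feats["act"] = output
    output.register_hook(lambda g: feats.update(grad=g))

model.layer4.register_forward_hook(hook)

def cam(score):
    """Weight conv activations by channel-pooled gradients of a scalar score."""
    model.zero_grad(set_to_none=True)
    score.backward(retain_graph=True)
    w = feats["grad"].mean(dim=(2, 3), keepdim=True)  # per-channel weights
    m = F.relu((w * feats["act"]).sum(dim=1))         # (1, H, W) saliency map
    return m / (m.max() + 1e-8)

x = torch.randn(1, 3, 224, 224)                 # stand-in for a preprocessed image
logits = model(x)
p = logits.argmax(dim=1).item()                 # network prediction P
q = logits.topk(2, dim=1).indices[0, 1].item()  # a contrast class Q (runner-up here)

why_p = cam(logits[0, p])                       # "Why P?": observed correlations for P
why_p_not_q = cam(logits[0, p] - logits[0, q])  # "Why P, rather than Q?": contrastive map
```

Note that the only change between the two questions is the scalar being backpropagated, which is what lets a single gradient-based mechanism answer different abductive questions.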

Updated: 2022-07-01