Higher-Order Explanations of Graph Neural Networks via Relevant Walks
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8) Pub Date: 2021-09-24, DOI: 10.1109/tpami.2021.3115452
Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, Grégoire Montavon
Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. As GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have remained black boxes for the user so far. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, where existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks in the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
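To make the idea of walk-level attribution concrete: for a purely linear GNN with sum-aggregation message passing, the prediction decomposes exactly into contributions of individual walks through the graph, and the nested LRP scheme reduces to this closed form. The sketch below is a hedged illustration under that simplifying assumption, not the paper's general algorithm; the toy graph, feature dimensions, and weight matrices are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-node graph with self-loops (hypothetical adjacency matrix)
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
X = rng.normal(size=(3, 4))    # node features
W1 = rng.normal(size=(4, 4))   # layer-1 weights
W2 = rng.normal(size=(4, 4))   # layer-2 weights
w_out = rng.normal(size=4)     # linear graph-level readout

def forward(A, X):
    H1 = A @ X @ W1            # message passing, layer 1
    H2 = A @ H1 @ W2           # message passing, layer 2
    return (H2 @ w_out).sum()  # graph-level prediction

def walk_relevance(i, j, k):
    # Contribution of the walk i -> j -> k to the prediction:
    # the product of edge weights along the walk times the
    # feature term pushed through both weight layers.
    return A[k, j] * A[j, i] * (X[i] @ W1 @ W2 @ w_out)

f = forward(A, X)
total = sum(walk_relevance(i, j, k)
            for i in range(3) for j in range(3) for k in range(3))
# Conservation property: walk relevances sum to the prediction.
assert np.isclose(f, total)
```

In the nonlinear case, GNN-LRP replaces this exact expansion with nested LRP passes that restrict relevance propagation to the nodes of each walk; the conservation of relevance illustrated by the assertion is the property the method approximates.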

Updated: 2021-09-24