Explaining Graph Neural Networks With Topology-Aware Node Selection: Application in Air Quality Inference
IEEE Transactions on Signal and Information Processing over Networks (IF 3.0), Pub Date: 2022-06-20, DOI: 10.1109/tsipn.2022.3180679
Esther Rodrigo Bonet, Tien Huu Do, Xuening Qin, Jelle Hofman, Valerio Panzica La Manna, Wilfried Philips, Nikos Deligiannis

Graph neural networks (GNNs) have proven their ability to model graph-structured data in diverse domains, including natural language processing and computer vision. However, like other deep learning models, GNNs suffer from a lack of explainability, which is becoming a major drawback, especially in health-related applications such as air pollution estimation, where a model's predictions might directly affect people's health and habits. In this paper, we present a novel post-hoc explainability framework for GNN-based models. More concretely, we propose a novel topology-aware kernelised node selection method, which we apply to the graph structure and the air pollution information. The proposed model effectively captures the graph topology and, for a given graph node, infers its most relevant nodes. Additionally, we propose a novel topological node embedding for each node, capturing, in vector form, the graph walks with respect to every other graph node. To prove the effectiveness of our explanation method, we consider commonly employed evaluation metrics, including fidelity, sparsity and contrastivity, and adapt them to evaluate explainability on a regression task. Extensive experiments on two real-world air pollution data sets demonstrate the effectiveness of the proposed method and illustrate it visually.
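The abstract does not detail how the topological node embeddings are constructed; the following is a minimal Python sketch under the assumption that each node's embedding stacks its walk counts to every other node (i.e. the corresponding rows of the adjacency-matrix powers) up to a fixed walk length. The function name and the maximum walk length are illustrative, not taken from the paper.

```python
import numpy as np

def topological_node_embeddings(adj: np.ndarray, max_walk_len: int = 3) -> np.ndarray:
    """Build a walk-based topological embedding for every node.

    Assumption (not spelled out in the abstract): the embedding of node i
    stacks, for walk lengths 1..max_walk_len, the number of walks from i to
    every other node, i.e. the i-th rows of A, A^2, ..., A^K.
    Returns an array of shape (n_nodes, max_walk_len * n_nodes).
    """
    n = adj.shape[0]
    walks = []
    power = np.eye(n)
    for _ in range(max_walk_len):
        power = power @ adj          # A^k: entry (i, j) counts walks of length k
        walks.append(power)
    # Concatenate the per-node rows across walk lengths -> vector-shaped embedding
    return np.concatenate(walks, axis=1)

# Toy 4-node chain graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
emb = topological_node_embeddings(A, max_walk_len=2)
print(emb.shape)  # (4, 8)
```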

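The kernelised node selection is likewise only described at a high level. The sketch below assumes an RBF kernel over node features that concatenate topological and pollution information, and returns, for a target node, the indices of its most similar (most relevant) nodes; the kernel choice and feature layout are assumptions rather than the paper's actual method.

```python
import numpy as np

def explain_node(features: np.ndarray, target: int, top_k: int = 5,
                 gamma: float = 1.0) -> np.ndarray:
    """Rank nodes by kernel similarity to `target` and return the top_k indices.

    Hypothetical sketch: `features` concatenates a topological embedding with
    each node's air-pollution measurements; an RBF kernel is assumed here,
    whereas the paper's actual kernelised selection may differ.
    """
    diffs = features - features[target]                    # (n_nodes, d)
    scores = np.exp(-gamma * np.sum(diffs ** 2, axis=1))   # RBF similarity to target
    scores[target] = -np.inf                               # exclude the node itself
    return np.argsort(scores)[::-1][:top_k]

# Toy example: 4 nodes, 3-dimensional features (structure + pollution reading)
feats = np.array([[1.0, 0.0, 12.0],
                  [0.9, 0.1, 11.5],
                  [0.0, 1.0, 35.0],
                  [0.1, 0.9, 33.0]])
print(explain_node(feats, target=0, top_k=2))  # nodes most relevant to node 0
```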
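The adaptation of the evaluation metrics to regression is also sketched here under stated assumptions: fidelity is taken as the absolute change in the predicted value when the explanation nodes are occluded, and sparsity as the fraction of nodes left out of the explanation. Contrastivity is omitted because its regression adaptation is not described in the abstract; the occlusion-by-zeroing scheme and the toy regressor are illustrative only.

```python
import numpy as np
from typing import Callable, Sequence

def fidelity_regression(predict: Callable[[np.ndarray], float],
                        features: np.ndarray,
                        explanation: Sequence[int]) -> float:
    """Fidelity adapted to regression (assumed formulation): absolute change in the
    predicted value when the explanation nodes' features are masked out.
    Larger values mean the selected nodes matter more to the prediction."""
    masked = features.copy()
    masked[list(explanation)] = 0.0            # occlude the explanation nodes
    return abs(predict(features) - predict(masked))

def sparsity(n_nodes: int, explanation: Sequence[int]) -> float:
    """Fraction of nodes left out of the explanation (higher = sparser)."""
    return 1.0 - len(explanation) / n_nodes

# Toy stand-in for a GNN regressor: predict the target's pollution as the feature mean
predict = lambda x: float(x.mean())
feats = np.random.default_rng(0).normal(size=(6, 4))
print(fidelity_regression(predict, feats, explanation=[1, 3]))
print(sparsity(6, [1, 3]))  # ~0.667
```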
Updated: 2024-08-26