Membership Inference Attack on Graph Neural Networks
arXiv - CS - Cryptography and Security. Pub Date: 2021-01-17, DOI: arxiv-2101.06570
Iyiola E. Olatunji, Wolfgang Nejdl, Megha Khosla

Graph Neural Networks (GNNs), which generalize traditional deep neural networks to graph data, have achieved state-of-the-art performance on several graph analytical tasks such as node classification, link prediction, and graph classification. We focus on how trained GNN models could leak information about the \emph{member} nodes that they were trained on. In particular, we focus on answering the question: given a graph, can we determine which nodes were used for training the GNN model? We operate in the inductive setting for node classification, which means that none of the nodes in the test set (the \emph{non-member} nodes) were seen during training. We propose a simple attack model that can distinguish between member and non-member nodes with only black-box access to the target model. We experimentally compare the privacy risks of four representative GNN models. Our results show that all the studied GNN models are vulnerable to privacy leakage. While in traditional machine learning models overfitting is considered the main cause of such leakage, we show that in GNNs the additional structural information is the major contributing factor.
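To make the black-box setting concrete, the sketch below shows one common way such a membership inference attack can be instantiated: the attacker queries the target GNN for class-posterior vectors of candidate nodes and trains a binary attack classifier on posteriors from a shadow model whose membership labels are known. The feature construction, the shadow data, and the logistic-regression attack model are illustrative assumptions for this sketch, not the exact attack architecture used in the paper.

```python
# Minimal sketch of a black-box membership inference attack on a GNN,
# assuming we can obtain posterior (softmax) vectors for queried nodes.
# Shadow posteriors below are random placeholders standing in for the
# outputs of a shadow GNN trained on data with known membership.
import numpy as np
from sklearn.linear_model import LogisticRegression

def posterior_features(posteriors: np.ndarray) -> np.ndarray:
    """Turn class-posterior vectors into attack features:
    confidences sorted in descending order plus prediction entropy."""
    sorted_conf = np.sort(posteriors, axis=1)[:, ::-1]
    entropy = -np.sum(posteriors * np.log(posteriors + 1e-12), axis=1, keepdims=True)
    return np.hstack([sorted_conf, entropy])

# Placeholder shadow-model posteriors (7 classes): members tend to get
# more peaked (confident) predictions than non-members.
shadow_member_post = np.random.dirichlet(np.ones(7) * 0.3, size=500)
shadow_nonmember_post = np.random.dirichlet(np.ones(7) * 2.0, size=500)

X = np.vstack([posterior_features(shadow_member_post),
               posterior_features(shadow_nonmember_post)])
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = member, 0 = non-member

attack_model = LogisticRegression(max_iter=1000).fit(X, y)

# At attack time, only black-box query access to the target GNN is needed:
# collect each candidate node's posterior and score its membership.
target_posteriors = np.random.dirichlet(np.ones(7), size=10)  # placeholder queries
membership_scores = attack_model.predict_proba(posterior_features(target_posteriors))[:, 1]
print(membership_scores)
```

In practice the placeholder Dirichlet samples would be replaced by real posteriors from a shadow GNN and from queries to the target model; the attack succeeds to the extent that member and non-member posterior distributions differ.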

Updated: 2021-01-19