Scalable attack on graph data by injecting vicious nodes
Data Mining and Knowledge Discovery (IF 4.8) | Pub Date: 2020-06-17 | DOI: 10.1007/s10618-020-00696-7
Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, Qinghua Zheng

Recent studies have shown that graph convolutional networks (GCNs) are vulnerable to carefully designed attacks that aim to cause misclassification of a specific node on the graph with unnoticeable perturbations. However, the vast majority of existing works cannot handle large-scale graphs because of their high time complexity. Additionally, existing works mainly focus on manipulating existing nodes on the graph, while in practice attackers usually do not have the privilege to modify the information of existing nodes. In this paper, we develop a more scalable framework named the Approximate Fast Gradient Sign Method (AFGSM), which considers a more practical attack scenario where adversaries can only inject new vicious nodes into the graph while having no control over the original graph. Methodologically, we provide an approximation strategy to linearize the model we attack and then derive an approximate closed-form solution with a lower time cost. To enable a fair comparison with existing attack methods that manipulate the original graph, we adapt them to the new attack scenario by injecting vicious nodes. Empirical results show that our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods without sacrificing attack performance. We have open-sourced the code of our method at https://github.com/wangjhgithub/AFGSM.
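The attack pipeline described above can be illustrated with a minimal sketch. The snippet below is not the authors' AFGSM implementation: it uses a linearized two-layer GCN surrogate (Z = Â²XW, dropping the nonlinearity, as in related attack literature), injects a single vicious node connected to the target, and picks the injected node's binary features by the sign of a finite-difference gradient of the target's classification margin. The graph, features, and weights are all synthetic; the closed-form approximation of the real method is replaced by this crude gradient-sign step purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic graph: n nodes, d binary features, k classes.
n, d, k = 8, 6, 3
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric, no self-loops
X = (rng.random((n, d)) < 0.5).astype(float)
W = rng.normal(size=(d, k))                 # stand-in for pre-trained GCN weights

def norm_adj(A):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = A_hat.sum(1) ** -0.5
    return (A_hat * d_inv_sqrt).T * d_inv_sqrt

def surrogate_logits(A, X):
    """Linearized 2-layer GCN surrogate: Z = A_hat^2 X W (no ReLU)."""
    S = norm_adj(A)
    return S @ S @ X @ W

target = 0
y_true = int(np.argmax(surrogate_logits(A, X)[target]))

# Inject one vicious node (index n) connected to the target.
A_inj = np.zeros((n + 1, n + 1))
A_inj[:n, :n] = A
A_inj[n, target] = A_inj[target, n] = 1.0
X_inj = np.vstack([X, np.zeros(d)])

def margin(X_aug):
    """Target's logit margin: true-class logit minus best competing logit."""
    z = surrogate_logits(A_inj, X_aug)[target]
    return z[y_true] - np.delete(z, y_true).max()

# Finite-difference gradient of the margin w.r.t. the injected node's
# features (a gradient-sign step in place of AFGSM's closed form).
eps, grad = 1e-4, np.zeros(d)
for j in range(d):
    Xp = X_inj.copy()
    Xp[n, j] += eps
    grad[j] = (margin(Xp) - margin(X_inj)) / eps

m_before = margin(X_inj)
X_inj[n] = (grad < 0).astype(float)          # turn on features that lower the margin
m_after = margin(X_inj)
print(f"target margin before injection features: {m_before:.3f}")
print(f"target margin after  injection features: {m_after:.3f}")
```

Because the surrogate is linear in X, setting exactly the features whose gradient is negative can only decrease (or leave unchanged) the target's margin; a negative margin would mean the target is misclassified by the surrogate.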

Updated: 2020-06-17