A Targeted Universal Attack on Graph Convolutional Network
arXiv - CS - Machine Learning Pub Date : 2020-11-29 , DOI: arxiv-2011.14365
Jiazhu Dai, Weifeng Zhu, Xiangfeng Luo

Graph-structured data exist in numerous real-life applications. As a state-of-the-art graph neural network, the graph convolutional network (GCN) plays an important role in processing graph-structured data. However, a recent study reported that GCNs are also vulnerable to adversarial attacks, meaning that GCN models can be fooled by unnoticeable modifications of the data. Among adversarial attacks on GCNs, there is a special kind called the universal adversarial attack, which generates a single perturbation that can be applied to any sample and causes the GCN model to output incorrect results. Although universal adversarial attacks in computer vision have been extensively researched, there is little work on universal adversarial attacks against graph-structured data. In this paper, we propose a targeted universal adversarial attack against GCNs. Our method designates a few nodes as attack nodes, whose attack capability is enhanced by a small number of fake nodes connected to them. During an attack, any victim node linked to the attack nodes will be misclassified by the GCN as belonging to the attack node class. Experiments on three popular datasets show that the proposed attack achieves an average success rate of 83% on any victim node in the graph using only 3 attack nodes and 6 fake nodes. We hope that our work will make the community aware of the threat of this type of attack and raise attention to its future defense.
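To make the attack mechanics concrete, below is a minimal sketch of the two graph modifications the abstract describes: appending fake nodes wired to the attack nodes, and linking a victim node to the attack nodes at attack time. This is an illustrative sketch, not the authors' code: the function names (add_fake_nodes, link_victim, gcn_norm) are invented for this example, and the fake-node features are left as a placeholder argument, whereas the paper would obtain them by optimizing for attack success.

import numpy as np

def add_fake_nodes(A, X, attack_nodes, n_fake, fake_features):
    # Append n_fake fake nodes, each connected to every attack node.
    # A: (n, n) symmetric adjacency matrix; X: (n, d) node features;
    # fake_features: (n_fake, d) features for the fake nodes (a placeholder
    # here; the paper optimizes these to strengthen the attack nodes).
    n = A.shape[0]
    A_big = np.zeros((n + n_fake, n + n_fake))
    A_big[:n, :n] = A
    for f in range(n, n + n_fake):
        for a in attack_nodes:
            A_big[f, a] = A_big[a, f] = 1.0
    return A_big, np.vstack([X, fake_features])

def link_victim(A, victim, attack_nodes):
    # At attack time, the only change is new edges from the victim node
    # to the attack nodes; the rest of the graph is untouched.
    A = A.copy()
    for a in attack_nodes:
        A[victim, a] = A[a, victim] = 1.0
    return A

def gcn_norm(A):
    # Standard GCN propagation matrix: D^{-1/2} (A + I) D^{-1/2}.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Toy usage: a 5-node graph, 2 attack nodes, 2 fake nodes with random
# placeholder features. A 2-layer GCN would then classify the victim
# from S @ relu(S @ X2 @ W0) @ W1, where S = gcn_norm(A3).
rng = np.random.default_rng(0)
A = np.zeros((5, 5)); A[0, 1] = A[1, 0] = 1.0
X = rng.random((5, 8))
attack_nodes = [0, 1]
A2, X2 = add_fake_nodes(A, X, attack_nodes, n_fake=2,
                        fake_features=rng.random((2, 8)))
A3 = link_victim(A2, victim=4, attack_nodes=attack_nodes)

Because the perturbation (the fake nodes plus the attack-node links) is constructed once and the per-victim change is just a few new edges, the same modification works for any victim node, which is what makes the attack universal.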

Updated: 2020-12-01