A Dual Robust Graph Neural Network Against Graph Adversarial Attacks
Neural Networks (IF 7.8), Pub Date: 2024-03-28, DOI: 10.1016/j.neunet.2024.106276
Qian Tao, Jianpeng Liao, Enze Zhang, Lusi Li

Graph Neural Networks (GNNs) are widely used and have achieved remarkable success in various real-world applications. Nevertheless, recent studies reveal the vulnerability of GNNs to graph adversarial attacks that fool them by modifying the graph structure. This vulnerability undermines the robustness of GNNs and poses significant security and privacy risks across various applications. Hence, it is crucial to develop robust GNN models that can effectively defend against such attacks. One simple approach is to remodel the graph. However, most existing methods cannot fully preserve the similarity relationships among the original nodes while learning the node representations required for reweighting edges. Furthermore, they lack supervision information regarding adversarial perturbations, hampering their ability to recognize adversarial edges. To address these limitations, we propose a novel Dual Robust Graph Neural Network (DualRGNN) against graph adversarial attacks. DualRGNN first incorporates a node-similarity-preserving graph refining (SPGR) module that prunes and refines the graph based on learned node representations preserving the original nodes' similarity relationships, thereby weakening the poisoning effect of graph adversarial attacks on the graph data. DualRGNN then employs an adversarial-supervised graph attention (ASGAT) network to enhance the model's capability of identifying adversarial edges by treating these edges as supervised signals. Through extensive experiments conducted on four benchmark datasets, DualRGNN demonstrates remarkable robustness against various graph adversarial attacks.
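The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of the kind of similarity-based graph refining the SPGR module describes: edges whose endpoint embeddings have low cosine similarity are pruned, and the similarity is reused as an edge weight. The function name refine_graph and the 0.5 threshold are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch (not the authors' released code): prune edges whose
# endpoint embeddings are dissimilar and reuse the similarity as edge weight,
# mirroring the general idea of similarity-preserving graph refining.
import torch
import torch.nn.functional as F

def refine_graph(edge_index: torch.Tensor,
                 node_emb: torch.Tensor,
                 threshold: float = 0.5):
    """Return a pruned edge_index and per-edge weights.

    edge_index: (2, E) long tensor of directed edges.
    node_emb:   (N, d) learned node representations.
    threshold:  illustrative cutoff; edges whose endpoint cosine similarity
                falls below it are treated as suspicious and dropped.
    """
    src, dst = edge_index
    sim = F.cosine_similarity(node_emb[src], node_emb[dst], dim=-1)  # shape (E,)
    keep = sim >= threshold
    return edge_index[:, keep], sim[keep].clamp(min=0.0)

# Toy example: nodes 0-2 share similar embeddings, node 3 does not;
# the edge (0, 3) behaves like an inserted adversarial edge and is removed.
emb = torch.tensor([[1.0, 0.1],
                    [0.9, 0.2],
                    [0.8, 0.1],
                    [0.0, 1.0]])
edges = torch.tensor([[0, 1, 2, 0],
                      [1, 2, 0, 3]])
pruned_edges, weights = refine_graph(edges, emb)
```

In this toy setting the refined graph keeps only edges between embedding-similar nodes; in the paper the refined, reweighted graph is then fed to the downstream GNN, and the ASGAT network additionally uses suspected adversarial edges as supervision signals.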
