Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers
arXiv - CS - Machine Learning. Pub Date: 2020-09-22, DOI: arxiv-2009.10233
Boyuan Feng, Yuke Wang, Xu Li, and Yufei Ding

Graph neural networks (GNNs) have achieved high performance in analyzing graph-structured data and have been widely deployed in safety-critical areas, such as finance and autonomous driving. However, only a few works have explored GNNs' robustness to adversarial attacks, and their designs are usually limited by the scale of input datasets (i.e., focusing on small graphs with only thousands of nodes). In this work, we propose SAG, the first scalable adversarial attack method based on the Alternating Direction Method of Multipliers (ADMM). We first decouple the large-scale graph into several smaller graph partitions and cast the original problem into several subproblems. We then solve these subproblems using projected gradient descent on both the graph topology and the node features, which leads to considerably lower memory consumption than conventional attack methods. Rigorous experiments further demonstrate that SAG significantly reduces the computation and memory overhead compared with the state-of-the-art approach, making it applicable to graphs with a large number of nodes and edges.
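The abstract outlines the core recipe: partition the graph, then run projected gradient descent (PGD) on a relaxed edge-perturbation mask within each partition. The sketch below illustrates that idea in PyTorch and is not the authors' SAG implementation: the `surrogate(adj, feat)` model interface, the random node split (standing in for a proper graph partitioner), the rescaling used in place of an exact budget projection, and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' SAG code): split the node set into
# blocks, attack each block with PGD on a relaxed edge-flip mask under a
# per-block budget, then merge the flips. `surrogate(adj, feat)` is an assumed
# GNN returning class logits.
import torch
import torch.nn.functional as F

def pgd_attack_block(adj, feat, labels, surrogate, budget, steps=50, lr=0.1):
    """PGD on a relaxed, symmetric edge-flip mask s in [0, 1]^{n x n}."""
    n = adj.shape[0]
    s = torch.zeros(n, n, requires_grad=True)
    for _ in range(steps):
        m = (s + s.T) / 2                          # symmetrize the flip mask
        pert_adj = adj + (1 - 2 * adj) * m         # 0 -> m (add edge), 1 -> 1 - m (remove edge)
        loss = F.cross_entropy(surrogate(pert_adj, feat), labels)
        grad, = torch.autograd.grad(loss, s)
        with torch.no_grad():
            s += lr * grad                         # gradient *ascent* on the attack loss
            s.clamp_(0.0, 1.0)                     # box projection onto [0, 1]
            if s.sum() > budget:                   # crude budget projection (exact version uses bisection)
                s *= budget / s.sum()
    m = (s.detach() + s.detach().T) / 2
    return m > 0.5                                 # discretized edge flips for this block

def partitioned_attack(adj, feat, labels, surrogate, budget, num_parts=4):
    """Split nodes into blocks, attack each block independently, merge the flips."""
    n = adj.shape[0]
    blocks = torch.chunk(torch.randperm(n), num_parts)  # placeholder for a real partitioner
    attacked = adj.clone()
    for idx in blocks:
        flips = pgd_attack_block(adj[idx][:, idx], feat[idx], labels[idx],
                                 surrogate, budget // num_parts)
        rows, cols = torch.meshgrid(idx, idx, indexing="ij")
        sub = attacked[rows, cols]
        sub[flips] = 1.0 - sub[flips]              # apply the flips within the block
        attacked[rows, cols] = sub
    return attacked
```

In the paper, ADMM coordinates these per-partition subproblems so the combined perturbation still satisfies a global constraint, and node features are attacked alongside the topology; that consensus step and the feature branch are omitted from this sketch for brevity.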

Updated: 2020-09-23