Scalable attack on graph data by injecting vicious nodes

Data Mining and Knowledge Discovery

Abstract

Recent studies have shown that graph convolutional networks (GCNs) are vulnerable to carefully designed attacks that aim to cause misclassification of a specific node on the graph with unnoticeable perturbations. However, the vast majority of existing works cannot handle large-scale graphs because of their high time complexity. Additionally, existing works mainly focus on manipulating existing nodes on the graph, while in practice attackers usually do not have the privilege to modify the information of existing nodes. In this paper, we develop a more scalable framework named Approximate Fast Gradient Sign Method (AFGSM), which considers a more practical attack scenario where adversaries can only inject new vicious nodes into the graph while having no control over the original graph. Methodologically, we provide an approximation strategy to linearize the model we attack and then derive an approximate closed-form solution with a lower time cost. For a fair comparison with existing attack methods that manipulate the original graph, we adapt them to the new attack scenario by injecting vicious nodes. Empirical results show that our proposed attack method can significantly reduce the classification accuracy of GCNs and is much faster than existing methods, without jeopardizing attack performance. We have open-sourced the code of our method at https://github.com/wangjhgithub/AFGSM.
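To make the linearization idea concrete, here is a minimal sketch in our own notation (with heavy simplifications; this is not the authors' released AFGSM implementation). Under a linear two-layer GCN surrogate Z = Â Â X W, the logits of a target node depend linearly on the feature vector of an injected neighbor, so damaging binary features can be ranked in closed form from the columns of W:

```python
import numpy as np

# Illustrative sketch only (not the released AFGSM code; names and
# simplifications are ours). Under a linear GCN surrogate
# Z = A_hat @ A_hat @ X @ W, switching feature i on for a vicious node
# linked to the target shifts the target's logits by a term proportional
# to W[i, :], so features can be ranked without iterative optimization.

def pick_injected_features(W, true_class, feature_budget):
    """Choose binary features for one vicious node linked to the target.

    W              : (d, C) surrogate weight matrix (features x classes)
    true_class     : index of the target node's correct class
    feature_budget : maximum number of features the vicious node may set
    """
    # Score each feature by how much it favors the strongest wrong class
    # over the true class.
    wrong = W.copy()
    wrong[:, true_class] = -np.inf
    score = wrong.max(axis=1) - W[:, true_class]

    chosen = np.argsort(score)[::-1][:feature_budget]
    x_new = np.zeros(W.shape[0])
    x_new[chosen[score[chosen] > 0]] = 1.0  # keep only attack-helpful features
    return x_new

# Toy usage: random surrogate weights, 50 features, 4 classes.
rng = np.random.default_rng(0)
x_vicious = pick_injected_features(rng.normal(size=(50, 4)),
                                   true_class=0, feature_budget=5)
```

The sketch ignores the degree-dependent normalization of Â that a full treatment would have to account for; it is meant only to show why a linearized surrogate admits a cheap closed-form feature choice.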


Notes

  1. The problem is formulated as a bi-level optimization because the perturbed test input is also used in the training procedure, so the learned model weights depend on the perturbed test data (a generic formulation of such a program is sketched after these notes).

  2. There is a difference in the constraint on feature perturbations. As explained in Sect. 3.2, we impose no explicit feature constraints on the vicious nodes, except that a pair of features may not appear together on a vicious node unless it co-occurs on some original node, whereas in the original scenario the number of feature perturbations cannot exceed a certain threshold (see the co-occurrence sketch after these notes).

  3. We record the actual run time on the same machine, configured with an Intel Core i9-7900X CPU (3.30 GHz) and 128 GB RAM.
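To make Note 1 concrete, a generic poisoning bi-level program of this kind can be written as follows (notation ours; the paper's exact objective and admissible perturbation set may differ):

```latex
% Generic bi-level poisoning program (notation ours). The attacker picks a
% perturbed graph G' from an admissible set \Phi(G); the defender's weights
% \theta^{*} are themselves the result of training on that perturbed graph.
\max_{G' \in \Phi(G)} \; \mathcal{L}_{\mathrm{atk}}\big(f_{\theta^{*}}(G')\big)
\quad \text{s.t.} \quad
\theta^{*} = \operatorname*{arg\,min}_{\theta} \mathcal{L}_{\mathrm{train}}\big(f_{\theta}(G')\big)
```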
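For Note 2, here is a minimal sketch of how such a co-occurrence check could be implemented (our reading of the constraint; the function name and details are illustrative, not the authors' code):

```python
import numpy as np

def cooccurrence_ok(X_orig, x_new):
    """Return True iff every pair of features switched on in x_new
    already co-occurs on at least one original node.

    X_orig : (n, d) binary feature matrix of the original nodes
    x_new  : (d,)   binary feature vector of a candidate vicious node
    """
    # C[i, j] is True iff features i and j appear together on some node.
    C = (X_orig.T @ X_orig) > 0
    idx = np.flatnonzero(x_new)
    return bool(np.all(C[np.ix_(idx, idx)]))
```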


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61872287 and 61532015), the Innovative Research Group of the National Natural Science Foundation of China (No. 61721002), the Innovation Research Team of the Ministry of Education (IRT_17R86), and the Project of China Knowledge Center for Engineering Science and Technology.

Author information

Corresponding author

Correspondence to Minnan Luo.

Additional information

Responsible editor: Ira Assent, Carlotta Domeniconi, Aristides Gionis, Eyke Hüllermeier.


About this article

Cite this article

Wang, J., Luo, M., Suya, F. et al. Scalable attack on graph data by injecting vicious nodes. Data Min Knowl Disc 34, 1363–1389 (2020). https://doi.org/10.1007/s10618-020-00696-7

