ASFGNN: Automated separated-federated graph neural network

Abstract

Graph Neural Networks (GNNs) have achieved remarkable performance by taking advantage of graph data. The success of GNN models depends on rich features and adjacency relationships. However, in practice, such data are usually isolated by different data owners (clients) and are thus likely to be Non-Independent and Identically Distributed (Non-IID). Meanwhile, given the limited network status of data owners, hyper-parameter optimization for collaborative learning approaches is time-consuming in data-isolation scenarios. To address these problems, we propose an Automated Separated-Federated Graph Neural Network (ASFGNN) learning paradigm. ASFGNN consists of two main components: the training of the GNN and the tuning of hyper-parameters. Specifically, to solve the data Non-IID problem, we first propose a separated-federated GNN learning model, which decouples the training of the GNN into two parts: the message-passing part, which is done by clients separately, and the loss-computing part, which is learnt by clients federally. To handle the time-consuming hyper-parameter tuning problem, we leverage Bayesian optimization to automatically tune the hyper-parameters of all the clients. We conduct experiments on benchmark datasets, and the results demonstrate that ASFGNN significantly outperforms the naive federated GNN in terms of both accuracy and parameter-tuning efficiency.
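
As an illustration of the first component, the following is a minimal Python sketch of the separated-federated idea as described above: each client keeps its message-passing weights private, and only the loss-computing (classification) layer is averaged across clients, FedAvg-style. The toy graph generator, layer sizes, and the plain data-size-weighted averaging are assumptions made for this sketch, not the paper's exact protocol.

```python
# Hypothetical sketch of separated-federated GNN training: the message-passing
# part stays local to each client, while the loss-computing part is averaged
# across clients. All names and sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def message_passing(A_hat, X, W):
    # Local GNN part: one ReLU propagation layer, never shared with the server.
    return np.maximum(A_hat @ X @ W, 0.0)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

class Client:
    def __init__(self, A_hat, X, y, n_hidden, n_classes):
        self.A_hat, self.X, self.y = A_hat, X, y
        self.W_mp = rng.normal(0.0, 0.1, (X.shape[1], n_hidden))   # private
        self.W_cls = rng.normal(0.0, 0.1, (n_hidden, n_classes))   # federated

    def local_step(self, lr=0.1):
        # One local update of the shared loss-computing layer (training of the
        # private message-passing weights is omitted for brevity).
        H = message_passing(self.A_hat, self.X, self.W_mp)
        P = softmax(H @ self.W_cls)
        Y = np.eye(P.shape[1])[self.y]
        self.W_cls -= lr * H.T @ (P - Y) / len(self.y)  # cross-entropy gradient
        return -np.mean(np.log(P[np.arange(len(self.y)), self.y] + 1e-12))

def federated_round(clients):
    # Server step: average only the loss-computing part, weighted by data size.
    sizes = np.array([len(c.y) for c in clients], dtype=float)
    W_avg = sum((s / sizes.sum()) * c.W_cls for s, c in zip(sizes, clients))
    for c in clients:
        c.W_cls = W_avg.copy()

def toy_client(n_hidden=16, n=20, d=8, k=3):
    # Random graph, features, and labels standing in for a client's private data.
    A = (rng.random((n, n)) < 0.2).astype(float)
    A_hat = np.maximum(A, A.T) + np.eye(n)
    A_hat /= A_hat.sum(axis=1, keepdims=True)  # row-normalized adjacency
    return Client(A_hat, rng.normal(size=(n, d)), rng.integers(0, k, n), n_hidden, k)

clients = [toy_client(), toy_client()]
for _ in range(5):
    losses = [c.local_step() for c in clients]
    federated_round(clients)
print("final mean loss:", float(np.mean(losses)))
```

The second component can be sketched in the same spirit. Below, scikit-optimize's gp_minimize stands in for the Bayesian optimizer; the search space and the use of training loss as the tuning objective are assumptions for illustration, and the paper's actual optimizer and objective may differ.

```python
# Hedged sketch of Bayesian hyper-parameter tuning over the toy loop above,
# using scikit-optimize's gp_minimize as a stand-in optimizer.
from skopt import gp_minimize
from skopt.space import Integer, Real

def objective(params):
    lr, n_hidden = params
    cs = [toy_client(n_hidden=int(n_hidden)) for _ in range(2)]
    loss = 0.0
    for _ in range(5):
        loss = float(np.mean([c.local_step(lr=lr) for c in cs]))
        federated_round(cs)
    return loss  # gp_minimize minimizes; a validation metric would be used in practice

result = gp_minimize(
    objective,
    [Real(1e-3, 1.0, prior="log-uniform"), Integer(4, 64)],  # lr, hidden size
    n_calls=10,
    random_state=0,
)
print("best (lr, n_hidden):", result.x)
```

Averaging only the loss-computing part keeps all structure-dependent computation on each client's private graph, which is what the abstract credits for tolerating Non-IID data better than naively averaging the full model.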

Author information

Corresponding author

Correspondence to Chaochao Chen.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Special Issue on Privacy-Preserving Computing

Guest Editors: Kaiping Xue, Zhe Liu, Haojin Zhu, Miao Pan and David S.L. Wei

About this article

Cite this article

Zheng, L., Zhou, J., Chen, C. et al. ASFGNN: Automated separated-federated graph neural network. Peer-to-Peer Netw. Appl. 14, 1692–1704 (2021). https://doi.org/10.1007/s12083-021-01074-w
