
  • Perspective

Adversarial interference and its mitigations in privacy-preserving collaborative machine learning

Abstract

Despite the rapid increase in the data available to train machine-learning algorithms in many domains, several applications suffer from a paucity of representative and diverse data. The medical and financial sectors, for example, are constrained by legal, ethical, regulatory and privacy concerns that prevent data sharing between institutions. Collaborative learning systems, such as federated learning, are designed to circumvent such restrictions and provide a privacy-preserving alternative by eschewing data sharing and relying instead on the distributed remote execution of algorithms. However, such systems are susceptible to malicious adversarial interference attempting to undermine their utility or divulge confidential information. Here we present an overview and analysis of current adversarial attacks and their mitigations in the context of collaborative machine learning. We discuss the applicability of attack vectors to specific learning contexts and attempt to formulate a generic foundation for adversarial influence and mitigation mechanisms. Moreover, we show that a number of context-specific learning conditions are exploited in a similar fashion across all settings. Lastly, we provide a focused perspective on open challenges and promising areas of future research in the field.

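To make the collaborative setting concrete, the sketch below illustrates one round-based federated-averaging workflow of the kind the abstract alludes to. It is a minimal toy example, not code from the paper: the linear-regression task, the client and server roles, and the hyperparameters are assumptions chosen purely for illustration.

# Minimal, illustrative sketch of federated averaging as described in the
# abstract. The toy linear-regression task, names and hyperparameters are
# hypothetical; this is not code from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Each institution holds private data that never leaves its site.
num_clients, dim = 3, 5
true_w = rng.normal(size=dim)
client_data = []
for _ in range(num_clients):
    X = rng.normal(size=(100, dim))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    client_data.append((X, y))

def local_update(w, X, y, lr=0.05):
    """One local gradient-descent step on a client's private data."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w_global = np.zeros(dim)
for _ in range(50):
    # Clients train locally; only model updates are shared with the server.
    local_models = [local_update(w_global.copy(), X, y) for X, y in client_data]
    # Plain averaging: the aggregation step that poisoning attacks target and
    # that robust aggregation or differentially private noise would augment.
    w_global = np.mean(local_models, axis=0)

print("distance to true weights:", float(np.linalg.norm(w_global - true_w)))

In this picture, the inference attacks surveyed in the article exploit the model updates that participants share, poisoning and backdoor attacks manipulate what participants contribute, and mitigations such as robust aggregation or differential privacy intervene at the averaging step.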

Fig. 1: Overview of the attacks.


Acknowledgements

We thank the OpenMined community for its support. Funding: G.K. received funding from the Technical University of Munich, School of Medicine Clinician Scientist Programme (KKF), project reference H14. D.U. received funding from the Technical University of Munich/Imperial College London Joint Academy for Doctoral Studies. This research was supported by the UK Research and Innovation London Medical Imaging and Artificial Intelligence Centre for Value Based Healthcare. The funders played no role in the design of the study, the preparation of the manuscript or the decision to publish.

Author information

Corresponding author

Correspondence to Georgios Kaissis.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information: Nature Machine Intelligence thanks Xiaoxiao Li, Tushar Semwal and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Usynin, D., Ziller, A., Makowski, M. et al. Adversarial interference and its mitigations in privacy-preserving collaborative machine learning. Nat Mach Intell 3, 749–758 (2021). https://doi.org/10.1038/s42256-021-00390-3

