Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks
arXiv - CS - Cryptography and Security Pub Date: 2021-08-02, DOI: arxiv-2108.00701
Yuwei Sun, Ng Chong, Hideya Ochiai

An attack on deep learning systems in which intelligent machines collaborate to solve problems could cause a node in the network to make a mistake on a critical judgment. At the same time, the security and privacy concerns of AI have galvanized the attention of experts from multiple disciplines. In this research, we successfully mounted adversarial attacks on a federated learning (FL) environment using three different datasets. The attacks leveraged generative adversarial networks (GANs) to affect the learning process and strove to reconstruct users' private data by learning hidden features from shared local model parameters. The attacks were target-oriented, drawing data with distinct class distributions from CIFAR-10, MNIST, and Fashion-MNIST, respectively. Moreover, by measuring the Euclidean distance between the real data and the reconstructed adversarial samples, we evaluated the performance of the adversary in the learning process under various scenarios. Finally, we successfully reconstructed the victim's real data from the shared global model parameters with all the applied datasets.
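For concreteness, the following is a minimal sketch (not the authors' released code) of the kind of GAN-based reconstruction attack the abstract describes, assuming a PyTorch image classifier as the shared global model: the adversary freezes the shared parameters, uses the model as a discriminator-like scorer, and trains a generator until its outputs are classified as a chosen target class. The Generator architecture, hyperparameters, helper names, and the MNIST-sized output shape are all illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps latent noise to 28x28 single-channel images (MNIST-sized, assumed)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def reconstruct_target_class(global_model, target_class, latent_dim=100,
                             steps=2000, batch_size=64, lr=1e-4):
    """Train a generator against the frozen shared global model so that its
    outputs are classified as `target_class`; returns the trained generator."""
    gen = Generator(latent_dim)
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    labels = torch.full((batch_size,), target_class, dtype=torch.long)
    global_model.eval()  # the adversary only reads the shared parameters
    for _ in range(steps):
        z = torch.randn(batch_size, latent_dim)
        logits = global_model(gen(z))
        loss = loss_fn(logits, labels)  # push samples toward the target class
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gen

def euclidean_distance(real, fake):
    """Mean L2 distance between real data and reconstructed samples."""
    return torch.norm(real.flatten(1) - fake.flatten(1), dim=1).mean()

The euclidean_distance helper mirrors the evaluation measure named in the abstract: reconstruction quality is scored by the Euclidean (L2) distance between the victim's real samples and the generator's output.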

Updated: 2021-08-03