The Good, The Bad, and The Ugly: Quality Inference in Federated Learning
arXiv - CS - Cryptography and Security. Pub Date: 2020-07-13, arXiv:2007.06236
Balázs Pejó

Collaborative machine learning algorithms are developed both for efficiency and to protect the privacy of the sensitive data used in training. Federated learning (FL) is the most popular of these methods, where 1) learning is done locally, and 2) only a subset of the participants contributes to each training round. Although no data is shared explicitly, recent studies have shown that models trained with FL could still leak some information. In this paper we focus on the quality property of the datasets and investigate whether the leaked information can be connected to specific participants. Via a differential attack, we analyze the information leakage using a few simple metrics and show that the quality ordering among the training participants' datasets can be reconstructed. Our scoring rules use only oracle access to a test dataset and require no further background information or computational power. We demonstrate two implications of this quality-ordering leakage: 1) improving the accuracy of the model by weighting the participants' updates, and 2) detecting misbehaving participants.
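A minimal sketch of the differential-attack idea the abstract describes, under assumed details not spelled out above: the server holds the per-participant updates of one round and scores each participant by how the oracle test loss changes when their update is left out of the aggregate. All names (test_loss, differential_scores, the toy linear model) are illustrative, not from the paper.

```python
import numpy as np

# Toy linear model: parameters are a weight vector; the "oracle" is mean
# squared error on a held-out test set. Any evaluable model would do.

def test_loss(params, test_x, test_y):
    """Oracle access to a test dataset: evaluate candidate parameters."""
    preds = test_x @ params
    return float(np.mean((preds - test_y) ** 2))

def differential_scores(global_params, updates, test_x, test_y):
    """Score each participant by the difference in test loss between the
    model aggregated WITH and WITHOUT their update (leave-one-out).
    Removing a good update raises the loss => positive score."""
    full = global_params + np.mean(updates, axis=0)
    loss_full = test_loss(full, test_x, test_y)
    scores = []
    for i in range(len(updates)):
        others = [u for j, u in enumerate(updates) if j != i]
        without_i = global_params + np.mean(others, axis=0)
        scores.append(test_loss(without_i, test_x, test_y) - loss_full)
    return np.array(scores)

# Demo: two honest participants and one misbehaving one whose update
# points away from the true model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
test_x = rng.normal(size=(200, 2))
test_y = test_x @ true_w

global_params = np.zeros(2)
good_update = 0.1 * (true_w - global_params)
updates = [
    good_update + rng.normal(scale=0.01, size=2),  # good data
    good_update + rng.normal(scale=0.01, size=2),  # good data
    -good_update,                                  # misbehaving
]

scores = differential_scores(global_params, updates, test_x, test_y)
print("quality scores:", scores)
print("inferred ordering (best to worst):", np.argsort(-scores))
```

The recovered ordering can then be used as the abstract suggests: weighting updates by (a normalized form of) these scores instead of uniform averaging, or flagging participants whose scores stay negative across rounds as misbehaving.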

Last updated: 2020-07-14