Faithful Edge Federated Learning: Scalability and Privacy
arXiv - CS - Multiagent Systems. Pub Date: 2021-06-30, DOI: arxiv-2106.15905
Meng Zhang, Ermin Wei, Randall Berry

Federated learning enables machine learning algorithms to be trained over a network of multiple decentralized edge devices without requiring the exchange of local datasets. Successfully deploying federated learning requires ensuring that agents (e.g., mobile devices) faithfully execute the intended algorithm, which has been largely overlooked in the literature. In this study, we first use risk bounds to analyze how the key feature of federated learning, unbalanced and non-i.i.d. data, affects agents' incentives to voluntarily participate in and obediently follow traditional federated learning algorithms. Specifically, our analysis reveals that agents with less typical data distributions and relatively more samples are more likely to opt out of or tamper with federated learning algorithms. To address this, we formulate the first faithful implementation problem of federated learning and design two faithful federated learning mechanisms which satisfy economic properties, scalability, and privacy. Further, the time complexity of computing all agents' payments is $\mathcal{O}(1)$ in the number of agents. First, we design a Faithful Federated Learning (FFL) mechanism which approximates the Vickrey-Clarke-Groves (VCG) payments via an incremental computation. We show that it achieves (probably approximate) optimality, faithful implementation, voluntary participation, and some other economic properties (such as budget balance). Second, by partitioning agents into several subsets, we present a scalable VCG mechanism approximation. We further design a scalable and Differentially Private FFL (DP-FFL) mechanism, the first differentially private faithful mechanism, which maintains these economic properties. Our mechanism enables three-way performance tradeoffs among privacy, the number of iterations needed, and payment accuracy loss.
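The abstract does not spell out the payment rule, only that the FFL mechanism approximates VCG payments via an incremental computation. As background, the following is a minimal, hypothetical sketch of the exact Clarke (VCG) payment rule being approximated; the function name and the `welfare_excluding` placeholder are assumptions for illustration, not the paper's notation.

```python
# Hypothetical sketch (not the paper's algorithm): the exact Clarke/VCG
# payment rule that the FFL mechanism is described as approximating.
# `welfare_excluding(i, subset)` is an assumed placeholder for the total
# utility of all agents other than i when the outcome (e.g., the trained
# model) is computed from the agents in `subset`.

def exact_vcg_payments(agents, welfare_excluding):
    """Charge each agent the externality it imposes on the others.

    Computing this exactly needs one extra counterfactual welfare
    evaluation per agent, so the cost grows linearly with the number of
    agents; the FFL mechanism instead approximates these payments
    incrementally so that computing all payments is O(1) in the number
    of agents.
    """
    full = set(agents)
    payments = {}
    for i in agents:
        welfare_without_i = welfare_excluding(i, full - {i})
        welfare_with_i = welfare_excluding(i, full)
        payments[i] = welfare_without_i - welfare_with_i
    return payments
```

The sketch only illustrates why exact VCG scales poorly in this setting: each payment requires a counterfactual evaluation without that agent, which motivates both the incremental approximation in FFL and the subset-partitioning approximation described above.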

Updated: 2021-07-01