FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning
arXiv - CS - Cryptography and Security. Pub Date: 2020-09-23, DOI: arxiv-2009.11248
Swanand Kadhe, Nived Rajaraman, O. Ozan Koyluoglu, Kannan Ramchandran

Recent attacks on federated learning demonstrate that keeping the training data on clients' devices does not provide sufficient privacy, as the model parameters shared by clients can leak information about their training data. A 'secure aggregation' protocol enables the server to aggregate clients' models in a privacy-preserving manner. However, existing secure aggregation protocols incur high computation/communication costs, especially when the number of model parameters is larger than the number of clients participating in an iteration -- a typical scenario in federated learning. In this paper, we propose a secure aggregation protocol, FastSecAgg, that is efficient in terms of computation and communication, and robust to client dropouts. The main building block of FastSecAgg is a novel multi-secret sharing scheme, FastShare, based on the Fast Fourier Transform (FFT), which may be of independent interest. FastShare is information-theoretically secure, and achieves a trade-off between the number of secrets, privacy threshold, and dropout tolerance. Riding on the capabilities of FastShare, we prove that FastSecAgg is (i) secure against the server colluding with 'any' subset of some constant fraction (e.g. $\sim10\%$) of the clients in the honest-but-curious setting; and (ii) tolerates dropouts of a 'random' subset of some constant fraction (e.g. $\sim10\%$) of the clients. FastSecAgg achieves significantly smaller computation cost than existing schemes while achieving the same (orderwise) communication cost. In addition, it guarantees security against adaptive adversaries, which can perform client corruptions dynamically during the execution of the protocol.
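The abstract itself contains no code; the following is a minimal, self-contained Python sketch of the two ideas it describes: packing several secrets (together with random masking values and structural zeros) into one vector and turning it into shares with a finite-field FFT, and exploiting the linearity of that transform so that the coordinate-wise sum of clients' share vectors is a sharing of the sum of their secrets. The field size, the number and placement of secret/random/zero slots, the naive O(N^2) DFT, and reconstruction from all N shares (rather than from a subset surviving dropouts) are illustrative assumptions, not the FastShare construction itself.

# Sketch of FFT-based multi-secret sharing over a prime field, in the spirit of
# FastShare, plus the additive homomorphism that secure aggregation relies on.
# The parameters and slot layout below are illustrative choices, not the paper's.
import random

P = 65537                          # prime with P - 1 divisible by N (2^16 + 1)
N = 8                              # number of shares / participating clients
OMEGA = pow(3, (P - 1) // N, P)    # N-th root of unity mod P (3 generates F_P^*)

def dft(coeffs, root):
    """Naive O(N^2) DFT over GF(P): evaluate the vector at root^0, ..., root^(N-1)."""
    return [sum(c * pow(root, i * j, P) for j, c in enumerate(coeffs)) % P
            for i in range(N)]

def make_shares(secrets, num_random):
    """Pack secrets, fresh random masks, and zeros into one vector, then FFT it
    to obtain N shares (one per participating client)."""
    assert len(secrets) + num_random <= N
    coeffs = list(secrets)
    coeffs += [random.randrange(P) for _ in range(num_random)]  # privacy slots
    coeffs += [0] * (N - len(coeffs))                           # structural zeros
    return dft(coeffs, OMEGA)

def reconstruct(shares):
    """Recover the packed vector from all N shares via the inverse DFT.
    (With dropouts one would instead solve a linear system on the surviving
    shares; that path is omitted in this sketch.)"""
    inv_n = pow(N, P - 2, P)
    inv_omega = pow(OMEGA, P - 2, P)
    return [(c * inv_n) % P for c in dft(shares, inv_omega)]

# Additive homomorphism: adding two clients' share vectors coordinate-wise
# yields a valid sharing of the element-wise sum of their secrets.
s1, s2 = [5, 7], [11, 2]
sh1, sh2 = make_shares(s1, num_random=2), make_shares(s2, num_random=2)
agg = [(a + b) % P for a, b in zip(sh1, sh2)]
print(reconstruct(agg)[:2])        # -> [16, 9], the element-wise sum mod P

In a full secure aggregation run, each client would secret-share (only) its masked model update, the server would add the shares it collects, and only the aggregate would ever be reconstructed; the snippet above shows just the sharing and summing primitive under the stated assumptions.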

Updated: 2020-09-24