Byzantine-Robust and Privacy-Preserving Framework for FedML
arXiv - CS - Hardware Architecture. Pub Date: 2021-05-05. DOI: arxiv-2105.02295. Hanieh Hashemi, Yongqin Wang, Chuan Guo, Murali Annavaram
Federated learning has emerged as a popular paradigm for collaboratively
training a model from data distributed among a set of clients. This learning
setting presents, among others, two unique challenges: how to protect the privacy
of the clients' data during training, and how to ensure the integrity of the
trained model. We propose a two-pronged solution that aims to address both
challenges under a single framework. First, we propose to create secure
enclaves using a trusted execution environment (TEE) within the server. Each
client can then encrypt their gradients and send them to verifiable enclaves.
The gradients are decrypted within the enclave without the fear of privacy
breaches. However, robustness check computations in a TEE are computationally
prohibitive. Hence, in the second step, we introduce a novel gradient encoding
that enables TEEs to encode the gradients and then offload the Byzantine check
computations to accelerators such as GPUs. Our proposed approach provides
theoretical bounds on information leakage and offers a significant speed-up
over the baseline in empirical evaluation.
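The workflow the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual encoding scheme; it assumes a simplified additive-mask encoding in which the TEE adds the same secret random vector to every decrypted gradient. Because a shared mask preserves pairwise distances, a distance-based robustness check (here a Krum-style score) can then run on the encoded gradients on an untrusted accelerator. All names (`krum_select`, the mask parameters) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_byzantine = 32, 5, 1

# Honest clients send similar gradients; one Byzantine client sends garbage.
honest = [rng.normal(0.0, 0.1, d) + 1.0 for _ in range(n_clients - n_byzantine)]
byzantine = [rng.normal(0.0, 10.0, d) for _ in range(n_byzantine)]
gradients = honest + byzantine  # last index is the Byzantine client

# --- Inside the TEE (hypothetical encoding) ---
# Add the SAME secret mask to every gradient: pairwise distances are
# preserved, so a distance-based check can run on the encoded data.
mask = rng.normal(0.0, 5.0, d)
encoded = [g + mask for g in gradients]

# --- On the untrusted accelerator (simulated) ---
def krum_select(vectors, f):
    """Krum-style score: sum of squared distances to the closest
    n - f - 2 neighbors; return the index with the smallest score."""
    n = len(vectors)
    scores = []
    for i, v in enumerate(vectors):
        dists = sorted(float(np.sum((v - w) ** 2))
                       for j, w in enumerate(vectors) if j != i)
        scores.append(sum(dists[: n - f - 2]))
    return int(np.argmin(scores))

selected = krum_select(encoded, f=n_byzantine)
print("selected client:", selected)  # an honest index, not the outlier
```

The key property being exploited is that `(g_i + r) - (g_j + r) = g_i - g_j`, so the accelerator never sees raw gradients yet computes the same pairwise distances; the paper's actual encoding and its leakage bounds are more involved than this toy version.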
Updated: 2021-05-07