Distributed Additive Encryption and Quantization for Privacy Preserving Federated Deep Learning
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2020-11-25. DOI: arxiv-2011.12623. Hangyu Zhu, Rui Wang, Yaochu Jin, Kaitai Liang, Jianting Ning
Homomorphic encryption is a useful gradient-protection technique in privacy-preserving federated learning. However, existing encrypted federated learning systems need a trusted third party to generate and distribute key pairs to the connected participants, which conflicts with the decentralized setting of federated learning and exposes the system to security risks. Moreover, encrypting all model parameters is computationally intensive, especially for large machine learning models such as deep neural networks. To mitigate these issues, we develop a practical, computationally efficient encryption-based protocol for federated deep learning in which the key pairs are collaboratively generated without the help of a third party. By quantizing the model parameters on the clients and performing an approximated aggregation on the server, the proposed method avoids encrypting and decrypting the entire model. In addition, a threshold-based secret sharing technique is designed so that no single party can hold the global private key for decryption, while aggregated ciphertexts can still be successfully decrypted by a threshold number of clients even if some clients are offline. Our experimental results confirm that the proposed method significantly reduces the communication costs and computational complexity compared to existing encrypted federated learning systems, without compromising performance or security.
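The threshold property described above — no single party holds the global private key, yet any threshold number of clients can jointly decrypt — is commonly realized with Shamir secret sharing. The following is a minimal toy sketch (not the paper's actual protocol) of t-of-n Shamir sharing over a prime field; the prime and parameter names are illustrative assumptions:

```python
# Toy sketch of Shamir t-of-n secret sharing (illustrative only, not the
# paper's protocol): a decryption key is split so that no single client
# holds it, yet any t clients can reconstruct it even when others are
# offline.
import random

P = 2**61 - 1  # a Mersenne prime; field size chosen only for the sketch

def split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Client x's share is the polynomial evaluated at x (x = 1..n).
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from >= t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat, P prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
shares = split(key, n=5, t=3)
# Any 3 of the 5 shares suffice, e.g. clients 1, 3 and 5:
assert reconstruct([shares[0], shares[2], shares[4]]) == key
```

Because reconstruction needs only t shares, dropped-out clients do not block decryption of the aggregated ciphertext, which matches the offline-tolerance claim in the abstract.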
Updated: 2020-11-27