Privacy-preserving distributed deep learning based on secret sharing
Information Sciences ( IF 8.1 ) Pub Date : 2020-03-26 , DOI: 10.1016/j.ins.2020.03.074
Jia Duan , Jiantao Zhou , Yuanman Li

Distributed deep learning (DDL) naturally provides a privacy-preserving solution that enables multiple parties to jointly learn a deep model without explicitly sharing their local datasets. However, existing privacy-preserving DDL schemes still suffer from severe information leakage and/or incur a significant increase in communication cost. In this work, we design a privacy-preserving DDL framework in which all participants can keep their local datasets private at low communication and computational cost, while still maintaining the accuracy and efficiency of the learned model. By adopting an effective secret sharing strategy, we allow each participant to split the intermediate parameters produced in the training process into shares and upload only an aggregation result to the cloud server. We theoretically show that the local dataset of a particular participant is well protected against the honest-but-curious cloud server as well as the other participants, even in the challenging case where the cloud server colludes with some participants. Extensive experimental results are provided to validate the superiority of the proposed secret sharing based distributed deep learning (SSDDL) framework.
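The paper does not reproduce its protocol here, but the core idea the abstract describes — each participant splitting a local value into additive shares so that the server learns only the aggregate — can be sketched as a toy example. All names, the field size, and the three-party setup below are illustrative assumptions, not the authors' actual construction.

```python
import random

PRIME = 2**61 - 1  # illustrative prime modulus for the share field

def make_shares(value, n):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three participants, each holding a private "gradient" (toy integers).
gradients = [5, 11, 7]
n = len(gradients)

# Each participant splits its value; share j is sent to participant j.
share_matrix = [make_shares(g, n) for g in gradients]

# Each participant sums the shares it received and uploads only that sum,
# so no single upload reveals any individual participant's value.
uploads = [sum(share_matrix[i][j] for i in range(n)) % PRIME
           for j in range(n)]

# The cloud server adds the uploads and recovers only the aggregate.
aggregate = sum(uploads) % PRIME
print(aggregate)  # 23, i.e. 5 + 11 + 7
```

In this additive scheme, any proper subset of shares is uniformly random, which is the intuition behind the abstract's claim of protection against an honest-but-curious server, and against collusion so long as at least one participant's share remains hidden.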

Updated: 2020-03-26