Weighted distributed differential privacy ERM: Convex and non-convex
Computers & Security (IF 5.6). Pub Date: 2021-04-18. DOI: 10.1016/j.cose.2021.102275
Yilin Kang, Yong Liu, Ben Niu, Weiping Wang

Distributed machine learning allows different parties to learn a single model over all of their data sets without disclosing their own data. In this paper, we propose a weighted distributed differentially private (WD-DP) empirical risk minimization (ERM) method that trains a model in the distributed setting while accounting for the different weights of different clients. For the first time, we theoretically analyze the benefits that the weighted paradigm brings to distributed differentially private machine learning, and our method advances the state-of-the-art differentially private ERM methods in the distributed setting. Through detailed theoretical analysis, we show that in the distributed setting both the noise bound and the excess empirical risk bound can be improved by taking into account the different weights held by multiple parties. Additionally, since strong convexity of the loss function is not easy to guarantee in some ERM problems, we generalize our method to loss functions that need not be strongly convex but satisfy the Polyak-Łojasiewicz condition. Experiments on real data sets show that our method is more reliable and improves the performance of distributed differentially private ERM, especially when the data scales on different clients are uneven. Moreover, an attractive result is that our distributed method achieves almost the same theoretical and experimental guarantees as previous centralized methods.
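For concreteness, the objective in weighted distributed ERM is typically of the following standard form (an assumption on our part; the paper's exact notation may differ). Client k holds n_k examples and is assigned a weight w_k (e.g., proportional to n_k), and the parties jointly minimize the weighted empirical risk:

    \min_{\theta}\; L(\theta) = \sum_{k=1}^{K} w_k\, L_k(\theta),
    \qquad
    L_k(\theta) = \frac{1}{n_k} \sum_{i=1}^{n_k} \ell(\theta;\, z_{k,i}),
    \qquad
    \sum_{k=1}^{K} w_k = 1.

The Polyak-Łojasiewicz condition referenced above is the standard inequality

    \frac{1}{2}\, \|\nabla L(\theta)\|_2^2 \;\ge\; \mu \left( L(\theta) - L^{*} \right) \quad \text{for all } \theta,

where L^{*} is the minimum value of L and \mu > 0. Every \mu-strongly convex function satisfies it, but so do some non-convex losses, which is what allows the analysis to drop strong convexity.

The sketch below illustrates one generic way such a method can be realized: each client contributes a clipped gradient, the server aggregates with the client weights, and Gaussian noise calibrated to the clipping norm is added before the update. This is a minimal illustration under our own assumptions (gradient perturbation, a noise scale sigma precomputed for a target (epsilon, delta)); it is not the paper's actual WD-DP algorithm, and all names below are hypothetical.

    import numpy as np

    def clip(g, C):
        # Rescale g so its L2 norm is at most C; this bounds each client's
        # contribution, which is what the noise scale is calibrated against.
        norm = np.linalg.norm(g)
        return g if norm <= C else g * (C / norm)

    def weighted_dp_step(theta, client_grads, weights, lr, C, sigma, rng):
        # Weighted aggregation of clipped client gradients; weights are
        # assumed to sum to 1, e.g. w_k = n_k / n for uneven data scales.
        agg = sum(w * clip(g, C) for w, g in zip(weights, client_grads))
        # Gaussian mechanism: noise proportional to the clipping norm C.
        # sigma is assumed to be chosen offline for the target (eps, delta).
        noise = rng.normal(0.0, sigma * C, size=theta.shape)
        return theta - lr * (agg + noise)

    # Example: three clients with uneven data scales, weights proportional
    # to their sample counts.
    rng = np.random.default_rng(0)
    n = np.array([1000, 300, 50])
    weights = n / n.sum()
    theta = np.zeros(5)
    grads = [rng.normal(size=5) for _ in range(3)]
    theta = weighted_dp_step(theta, grads, weights, lr=0.1, C=1.0,
                             sigma=2.0, rng=rng)

Weighting by sample count is one natural choice here: a client holding more data moves the aggregate further, which matches the paper's observation that the weighted paradigm helps most when data scales across clients are uneven.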



Last updated: 2021-05-03