DP-NormFedAvg: Normalizing Client Updates for Privacy-Preserving Federated Learning
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2021-06-13, DOI: arxiv-2106.07094
Rudrajit Das, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon

In this paper, we focus on facilitating differentially private quantized communication between the clients and server in federated learning (FL). Towards this end, we propose to have the clients send a \textit{private quantized} version of only the \textit{unit vector} along the change in their local parameters to the server, \textit{completely throwing away the magnitude information}. We call this algorithm \texttt{DP-NormFedAvg} and show that it has the same order-wise convergence rate as \texttt{FedAvg} on smooth quasar-convex functions (an important class of non-convex functions for modeling optimization of deep neural networks), thereby establishing that discarding the magnitude information is not detrimental from an optimization point of view. We also introduce QTDL, a new differentially private quantization mechanism for unit-norm vectors, which we use in \texttt{DP-NormFedAvg}. QTDL employs \textit{discrete} noise having a Laplacian-like distribution on a \textit{finite support} to provide privacy. We show that under a growth-condition assumption on the per-sample client losses, the extra per-coordinate communication cost in each round incurred due to privacy by our method is $\mathcal{O}(1)$ with respect to the model dimension, which is an improvement over prior work. Finally, we show the efficacy of our proposed method with experiments on fully-connected neural networks trained on CIFAR-10 and Fashion-MNIST.
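
To make the high-level description above concrete, the following Python sketch illustrates the client-side step: compute the local parameter change, keep only its unit direction, map each coordinate to a small integer grid, and add discrete, Laplacian-like noise on a finite support before sending it to the server. This is a minimal illustration under assumed parameters; the grid resolution, noise distribution, and function names are placeholders, not the paper's QTDL mechanism or its privacy calibration.

```python
import numpy as np


def discrete_laplace_noise(size, scale, support, rng):
    """Sample integer noise from a truncated, Laplacian-like distribution on
    {-support, ..., support}. Illustrative stand-in; QTDL's exact distribution
    and privacy accounting are specified in the paper."""
    values = np.arange(-support, support + 1)
    probs = np.exp(-np.abs(values) / scale)
    probs /= probs.sum()
    return rng.choice(values, size=size, p=probs)


def client_private_update(w_before, w_after, num_levels=16,
                          noise_scale=1.0, noise_support=8, rng=None):
    """Hypothetical client-side step: privatize and quantize the unit vector
    along the local parameter change, discarding its magnitude entirely."""
    rng = rng or np.random.default_rng()
    delta = w_after - w_before
    unit = delta / (np.linalg.norm(delta) + 1e-12)   # keep direction only

    # Map each coordinate of the unit vector (values in [-1, 1]) to an
    # integer grid with `num_levels` levels per unit interval.
    quantized = np.round(unit * num_levels).astype(int)

    # Add discrete, Laplacian-like noise on a finite support for privacy.
    noise = discrete_laplace_noise(unit.size, noise_scale, noise_support, rng)
    return quantized + noise  # small integers: cheap to communicate


# Example: a client sends its privatized direction for a 10-dimensional model.
rng = np.random.default_rng(0)
w0, w1 = rng.normal(size=10), rng.normal(size=10)
print(client_private_update(w0, w1, rng=rng))
```

Because clients transmit only directions, the server would average the decoded unit vectors and apply a server-chosen step size; the integer, finite-support encoding is what keeps the per-coordinate communication overhead of privacy $\mathcal{O}(1)$ in the model dimension.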

Updated: 2021-06-15