Model compression and privacy preserving framework for federated learning
Future Generation Computer Systems ( IF 6.2 ) Pub Date : 2022-11-04 , DOI: 10.1016/j.future.2022.10.026
Xi Zhu , Junbo Wang , Wuhui Chen , Kento Sato

Federated learning (FL) is a collaborative learning paradigm that has attracted extensive attention for its privacy-preserving characteristics: clients collaboratively train a shared neural network model on their local datasets and, throughout training, upload only model parameters over the wireless network rather than the original data. Because FL significantly reduces transmission, it can further meet the efficiency and security requirements of next-generation wireless systems. Although FL reduces the amount of information that must be transmitted, the model-parameter updates still suffer from privacy leakage and communication bottlenecks, especially in wireless networks. To address these privacy and communication problems, this paper proposes a model-compression-based FL framework. First, the designed model compression framework provides effective support for efficient and secure model-parameter updating in FL while preserving the personalization of each client. Second, the proposed perturbed model compression method further reduces the model size and protects its privacy without sacrificing much accuracy. In addition, a reconstruction algorithm enables decryption and decompression to be performed simultaneously on the encrypted and compressed model parameters produced by the perturbed model compression method. Finally, illustrative results demonstrate that the proposed framework can significantly reduce the number of uploaded model parameters while providing strong privacy preservation. For example, at a compression ratio of 0.0953 (i.e., only 9.53% of the parameters are uploaded), the accuracy on MNIST reaches 97%, compared with 98% without compression.
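To make the idea of "perturbed model compression" concrete, the sketch below combines top-k magnitude sparsification (compression) with additive Gaussian noise on the retained values (perturbation), and a server-side reconstruction that rebuilds a dense update. This is a minimal illustration of the general technique, not the paper's actual algorithm; the function names, the 0.0953 ratio (borrowed from the abstract's example), and the noise scale are all assumptions for demonstration.

```python
import numpy as np

def perturbed_compress(params, ratio=0.0953, noise_std=0.01, seed=0):
    """Illustrative sketch (not the paper's method): keep the top `ratio`
    fraction of parameters by magnitude, then add Gaussian noise to the
    kept values as a simple privacy perturbation."""
    rng = np.random.default_rng(seed)
    flat = params.ravel()
    k = max(1, int(round(ratio * flat.size)))
    idx = np.argsort(np.abs(flat))[-k:]               # indices of top-k entries
    values = flat[idx] + rng.normal(0.0, noise_std, k)  # perturb kept values
    return idx, values, flat.size

def reconstruct(idx, values, size):
    """Server side: rebuild a dense update, with zeros at dropped positions."""
    flat = np.zeros(size)
    flat[idx] = values
    return flat

# Example: a client compresses a 1000-parameter update before upload.
params = np.random.default_rng(42).normal(size=1000)
idx, vals, n = perturbed_compress(params)
update = reconstruct(idx, vals, n)
print(len(idx) / n)  # → 0.095, i.e. only ~9.5% of parameters are uploaded
```

In practice only `idx` and `vals` would be transmitted, which is where the communication saving comes from; the paper additionally encrypts the compressed parameters so that decryption and decompression can be fused in one reconstruction step.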




Updated: 2022-11-04