QuPeL: Quantized Personalization with Applications to Federated Learning
arXiv - CS - Distributed, Parallel, and Cluster Computing Pub Date : 2021-02-23 , DOI: arxiv-2102.11786
Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi

Traditionally, federated learning (FL) aims to train a single global model collaboratively across multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity of data across clients and collaboration among clients with diverse resources. In this work, we introduce a quantized and personalized FL algorithm, QuPeL, that facilitates collective training with heterogeneous clients while respecting resource diversity. For personalization, we allow clients to learn compressed personalized models with different quantization parameters depending on their resources. Towards this, we first propose an algorithm for learning quantized models through a relaxed optimization problem in which the quantization values themselves are also optimized. When each client participating in the (federated) learning process has different requirements for the quantized model (in both quantization values and precision), we formulate a quantized personalization framework by adding a penalty term to each local client objective that measures its distance from a globally trained model, thereby encouraging collaboration. We develop an alternating proximal gradient update for solving this quantized personalization problem, and we analyze its convergence properties. Numerically, we show that optimizing over the quantization levels improves performance, and we validate that QuPeL outperforms both FedAvg and purely local training of clients in a heterogeneous setting.
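The alternating update sketched in the abstract (a gradient step on the local model pulled toward both its quantized version and the global model, followed by re-fitting the quantization centers) can be illustrated roughly as below. This is a simplified sketch under our own assumptions, not the authors' implementation: the function name `qupel_style_step`, the penalty weights `lam` (toward the quantized values) and `mu` (toward the global model), and the mean-based center update are all illustrative choices.

```python
import numpy as np

def nearest_centers(w, centers):
    # Assign each weight to its nearest quantization center.
    idx = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
    return centers[idx], idx

def qupel_style_step(w, centers, global_w, grad_fn, lr=0.1, lam=1.0, mu=1.0):
    """One alternating update on a single client.

    Step 1: gradient step on the local weights w for
        loss(w) + lam/2 * ||w - Q(w)||^2 + mu/2 * ||w - global_w||^2,
    where Q(w) snaps each weight to its nearest center.
    Step 2: re-fit each quantization center to the mean of the
    weights currently assigned to it.
    """
    q, _ = nearest_centers(w, centers)
    g = grad_fn(w) + lam * (w - q) + mu * (w - global_w)
    w = w - lr * g
    # Update centers given the new weights.
    _, idx = nearest_centers(w, centers)
    new_centers = centers.copy()
    for k in range(len(centers)):
        if np.any(idx == k):
            new_centers[k] = w[idx == k].mean()
    return w, new_centers
```

With small penalty weights the local model tracks its own loss minimizer while being softly pulled toward a quantized, globally-regularized solution; increasing `lam` trades local accuracy for a model closer to its compressed form.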

Updated: 2021-02-24