Federated Composite Optimization
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2020-11-17, DOI: arxiv-2011.08474
Honglin Yuan, Manzil Zaheer, Sashank Reddi

Federated Learning (FL) is a distributed learning paradigm that scales on-device learning collaboratively and privately. Standard FL algorithms such as Federated Averaging (FedAvg) are primarily geared towards smooth, unconstrained settings. In this paper, we study the Federated Composite Optimization (FCO) problem, in which the FL objective includes an additive, possibly non-smooth component. Such optimization problems are fundamental to machine learning and arise naturally in the context of regularization (e.g., sparsity, low rank, monotonicity, and constraints). To tackle this problem, we propose different primal/dual averaging approaches and study their communication and computation complexities. Of particular interest is Federated Dual Averaging (FedDualAvg), a federated variant of the dual averaging algorithm. FedDualAvg uses a novel double averaging procedure, which combines the gradient averaging step of standard dual averaging with an averaging of client updates akin to standard federated averaging. Our theoretical analysis and empirical experiments demonstrate that FedDualAvg outperforms baselines for FCO.
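To make the "double averaging" idea concrete, below is a minimal toy sketch of a FedDualAvg-style method on a synthetic L1-regularized least-squares problem. It is not the paper's exact algorithm: the step sizes, the fixed client/round structure, and the prox mapping via soft-thresholding are illustrative simplifications. The key features it does reflect are that clients accumulate gradients in a dual state, the server averages client *dual* states (rather than primal iterates), and the non-smooth L1 term is handled through its proximal operator instead of a subgradient.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fed_dual_avg(client_data, lam=0.01, eta=0.05, rounds=40, local_steps=5):
    """Toy FedDualAvg sketch for (1/M) * sum_m 0.5*||A_m x - b_m||^2 + lam*||x||_1.

    Each round, every client continues dual averaging from the shared dual
    state z (a running sum of gradients); the server then averages the
    clients' dual states -- the "double averaging": gradient averaging
    inside dual averaging, plus FedAvg-style averaging across clients.
    Hyperparameters here are illustrative, not the paper's schedules.
    """
    d = client_data[0][0].shape[1]
    z = np.zeros(d)   # server dual state: running sum of gradients
    t = 0             # number of local steps folded into z
    for _ in range(rounds):
        client_states = []
        for A, b in client_data:
            z_m, t_m = z.copy(), t
            for _ in range(local_steps):
                # Primal point recovered from the dual state via the prox
                # of the L1 term (soft-thresholding the averaged gradient).
                x_m = soft_threshold(-eta * z_m / max(t_m, 1), eta * lam)
                grad = A.T @ (A @ x_m - b)   # gradient of 0.5*||Ax - b||^2
                z_m += grad
                t_m += 1
            client_states.append(z_m)
        z = np.mean(client_states, axis=0)   # average client updates in dual space
        t += local_steps
    return soft_threshold(-eta * z / t, eta * lam)
```

A usage sketch: build a few synthetic clients `(A_m, b_m)` and call `fed_dual_avg(clients)`; the returned iterate is typically sparse because coordinates whose averaged dual coordinate stays below the threshold are mapped exactly to zero, which is one practical advantage of dual averaging over subgradient methods for composite objectives.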

Updated: 2020-11-18