Sample-Based and Feature-Based Federated Learning for Unconstrained and Constrained Nonconvex Optimization via Mini-batch SSCA
IEEE Transactions on Signal Processing (IF 5.4), Pub Date: 2022-06-30, DOI: 10.1109/tsp.2022.3185895
Ying Cui, Yangchen Li, Chencheng Ye

Federated learning (FL) has become an active research area, as it enables the collaborative training of machine learning models among multiple clients that hold sensitive local data. Nevertheless, unconstrained federated optimization has been studied mainly via stochastic gradient descent (SGD), which may converge slowly, and constrained federated optimization, which is more challenging, has not been investigated so far. This paper studies both sample-based and feature-based federated optimization, and for each it considers unconstrained and constrained nonconvex problems. First, we propose FL algorithms using stochastic successive convex approximation (SSCA) and mini-batch techniques. These algorithms can adequately exploit the structures of the objective and constraint functions and incrementally make use of samples. We show that the proposed FL algorithms converge to stationary points of the unconstrained nonconvex problems and to Karush-Kuhn-Tucker (KKT) points of the constrained ones. Next, we provide algorithm examples with appealing computational complexity and communication load per communication round. We show that the proposed algorithm examples for unconstrained federated optimization are identical to FL algorithms based on momentum SGD, and we provide an analytical connection between SSCA and momentum SGD. Finally, numerical experiments demonstrate the inherent advantages of the proposed algorithms in convergence speed, communication and computation costs, and model specification.
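
To make the SSCA template in the abstract concrete, the following is a minimal Python sketch of mini-batch SSCA for an unconstrained finite-sum problem. It is an illustration under standard SSCA assumptions (a recursively averaged convex quadratic surrogate and diminishing step sizes), not the paper's exact algorithm examples; the toy least-squares objective, the step-size exponents, and all function names are assumptions made here for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum objective: f(x) = (1/N) * sum_i 0.5 * (a_i^T x - b_i)^2
N, d = 1000, 10
A = rng.normal(size=(N, d))
b = rng.normal(size=N)

def minibatch_grad(x, idx):
    """Stochastic gradient of f on the mini-batch indexed by idx."""
    Ab = A[idx]
    return Ab.T @ (Ab @ x - b[idx]) / len(idx)

def ssca(T=200, batch=32, tau=1.0):
    x = np.zeros(d)
    g = np.zeros(d)   # recursively averaged gradient estimate
    for t in range(1, T + 1):
        rho, gamma = 1.0 / t**0.6, 1.0 / t**0.9   # diminishing step sizes
        idx = rng.choice(N, size=batch, replace=False)
        # 1) Update the convex quadratic surrogate
        #    fbar_t(v) = g^T (v - x) + (tau/2) * ||v - x||^2,
        #    whose gradient estimate g is averaged across iterations.
        g = (1 - rho) * g + rho * minibatch_grad(x, idx)
        # 2) Minimize the surrogate; here it has the closed form
        #    xbar = x - g / tau.
        xbar = x - g / tau
        # 3) Iterate averaging: x^{t+1} = (1 - gamma) x^t + gamma * xbar^t.
        x = (1 - gamma) * x + gamma * xbar
    return x

x_hat = ssca()

With this quadratic surrogate, steps 2 and 3 collapse to x^{t+1} = x^t - (gamma_t / tau) g_t, where g_t is a running average of mini-batch gradients, i.e., a momentum-SGD-style update; this is the kind of correspondence between SSCA and momentum SGD that the abstract refers to.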

Updated: 2022-06-30