Decentralized Non-Convex Learning With Linearly Coupled Constraints: Algorithm Designs and Application to Vertical Learning Problem
IEEE Transactions on Signal Processing (IF 5.4), Pub Date: 2022-06-27, DOI: 10.1109/tsp.2022.3184772
Jiawei Zhang, Songyang Ge, Tsung-Hui Chang, Zhi-Quan Luo

Motivated by the need for decentralized learning, this paper designs a distributed algorithm for solving nonconvex problems with general linear constraints over a multi-agent network. In the considered problem, each agent owns some local information and a local variable for jointly minimizing a cost function, but the local variables are coupled by linear constraints. Most existing methods for such problems apply only to convex problems or to problems with specific linear constraints; a distributed algorithm for general linear constraints in the nonconvex setting is still lacking. To fill this gap, we propose a new algorithm, called the proximal dual consensus (PDC) algorithm, which combines a proximal technique with a dual consensus method. We show that under certain conditions the proposed PDC algorithm generates an $\epsilon$-Karush-Kuhn-Tucker solution in $\mathcal {O}(1/\epsilon)$ iterations, matching, up to a constant, the lower bound for distributed nonconvex problems. Numerical results demonstrate the good performance of the proposed algorithms on two vertical learning problems in machine learning over a multi-agent network.
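For concreteness, the problem class the abstract describes is commonly written in the following form (a standard formulation inferred from the abstract, not quoted from the paper):

$$\min_{x_1,\dots,x_N}\;\sum_{i=1}^{N} f_i(x_i)\quad\text{s.t.}\quad \sum_{i=1}^{N} A_i x_i = b,$$

where agent $i$ privately holds a (possibly nonconvex) cost $f_i$, its local variable $x_i$, and its coupling block $A_i$. In this notation, and assuming smooth $f_i$, an $\epsilon$-KKT solution is a primal-dual pair $(x, y)$ that satisfies stationarity and feasibility up to tolerance $\epsilon$, e.g. $\|\nabla f_i(x_i) + A_i^\top y\| \le \epsilon$ for each agent and $\|\sum_{i=1}^{N} A_i x_i - b\| \le \epsilon$.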
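A minimal numerical sketch of the two ingredients named above, a proximal primal step plus consensus on local copies of the dual variable, is given below. It is illustrative only: the problem data, ring network, and step sizes are assumptions, the local costs are taken as convex quadratics so the proximal step has a closed form (the paper itself targets nonconvex $f_i$), and the iteration is a generic proximal/dual-consensus scheme rather than the exact PDC updates.

```python
# Illustrative sketch only: a toy dual-consensus scheme with proximal primal
# updates, in the spirit of (but not identical to) the PDC algorithm.
# All problem data below are hypothetical; f_i(x) = 0.5*||x - d_i||^2 is
# chosen convex so the proximal step has a closed form.
import numpy as np

rng = np.random.default_rng(0)
N, n, m = 5, 3, 2                       # agents, local dim, constraint dim
A = [rng.standard_normal((m, n)) for _ in range(N)]   # local coupling blocks
d = [rng.standard_normal(n) for _ in range(N)]        # local cost targets
b = rng.standard_normal(m)                            # coupled-constraint rhs

# Doubly stochastic mixing matrix for a ring graph (each agent averages
# with its two neighbors); models the multi-agent communication network.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = [np.zeros(n) for _ in range(N)]     # local primal variables
y = [np.zeros(m) for _ in range(N)]     # local copies of the dual variable
c, alpha = 1.0, 0.02                    # proximal weight, dual step size

for k in range(5000):
    # Proximal primal step:
    #   x_i <- argmin_x 0.5||x - d_i||^2 + y_i^T A_i x + (1/(2c))||x - x_i||^2,
    # which is available in closed form for this quadratic cost.
    x = [(d[i] - A[i].T @ y[i] + x[i] / c) / (1.0 + 1.0 / c)
         for i in range(N)]
    # Dual consensus step: average neighbors' multiplier copies, then take a
    # local ascent step along the residual of agent i's share A_i x_i - b/N.
    y_mix = [sum(W[i, j] * y[j] for j in range(N)) for i in range(N)]
    y = [y_mix[i] + alpha * (A[i] @ x[i] - b / N) for i in range(N)]

print("constraint residual:",
      np.linalg.norm(sum(A[i] @ x[i] for i in range(N)) - b))
print("dual disagreement:",
      max(np.linalg.norm(y[i] - y[0]) for i in range(N)))
```

The design point mirrored here is that no agent ever sees another agent's $x_j$ or $f_j$: the coupling constraint is handled entirely through neighbor averaging of the multiplier copies $y_i$, which the consensus step drives toward a common dual variable.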
