Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM
arXiv - CS - Information Theory. Pub Date: 2020-09-14, DOI: arxiv-2009.06459
Chaouki Ben Issaid, Anis Elgabli, Jihong Park, Mehdi Bennis

In this paper, we propose a communication-efficient decentralized machine learning framework that solves a consensus optimization problem defined over a network of interconnected workers. The proposed algorithm, Censored-and-Quantized Generalized GADMM (CQ-GGADMM), leverages the novel worker grouping and decentralized learning ideas of the Group Alternating Direction Method of Multipliers (GADMM), and pushes the frontier in communication efficiency by extending its applicability to generalized network topologies, while incorporating link censoring of negligible updates after quantization. We theoretically prove that CQ-GGADMM achieves a linear convergence rate when the local objective functions are strongly convex, under some mild assumptions. Numerical simulations corroborate that CQ-GGADMM achieves higher communication efficiency, in terms of both the number of communication rounds and transmit energy consumption, without compromising accuracy or convergence speed, compared to benchmark schemes based on decentralized ADMM without censoring, quantization, and/or the worker grouping method of GADMM.
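The abstract describes two communication-saving mechanisms layered on top of GADMM-style worker grouping: quantizing each model update and censoring (skipping) transmissions whose quantized change is negligible. The Python sketch below illustrates that censor-after-quantize decision for a single worker under simplifying assumptions; the function names, the uniform quantizer, and the fixed censoring threshold are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def quantize(update, num_bits=2):
    # Uniform quantization of the update vector onto 2**num_bits levels
    # spanning its own [min, max] range (an illustrative quantizer).
    lo, hi = update.min(), update.max()
    if hi == lo:
        return update.copy()
    step = (hi - lo) / (2 ** num_bits - 1)
    return lo + np.round((update - lo) / step) * step

def censored_quantized_update(theta_new, theta_last_sent, threshold=1e-3, num_bits=2):
    # Quantize the change since the last transmitted model, then transmit it
    # only if it is large enough to matter ("censoring"); otherwise skip the
    # round and keep the previous reference model.
    q_delta = quantize(theta_new - theta_last_sent, num_bits)
    if np.linalg.norm(q_delta) <= threshold:
        return None, theta_last_sent               # censored: nothing is sent
    return q_delta, theta_last_sent + q_delta      # transmit and advance reference

# Example: a worker decides whether to broadcast its update to its neighbors.
rng = np.random.default_rng(0)
theta_prev = rng.standard_normal(10)
theta_curr = theta_prev + 1e-4 * rng.standard_normal(10)   # negligible change
msg, theta_ref = censored_quantized_update(theta_curr, theta_prev)
print("transmitted" if msg is not None else "censored (no transmission)")
```

In this sketch a negligible change is suppressed entirely, which is the source of the communication savings: workers only spend a transmission (and its energy) when the quantized update exceeds the threshold.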

Updated: 2020-09-15