Communication-Efficient Distributed Cooperative Learning with Compressed Beliefs
arXiv - CS - Multiagent Systems. Pub Date: 2021-02-14. arXiv: 2102.07767
Mohammad Taha Toghani, Cesar A. Uribe

We study the problem of distributed cooperative learning, where a group of agents seeks to agree on the set of hypotheses that best describes a sequence of private observations. For scenarios where the set of hypotheses is large, we propose a belief update rule in which agents share compressed (either sparse or quantized) beliefs with an arbitrary positive compression rate. Our algorithm leverages a unified and straightforward communication rule that lets agents use a wide range of compression operators as black-box modules. We prove almost sure asymptotic exponential convergence of the beliefs around the set of optimal hypotheses. Additionally, we show a non-asymptotic, explicit, and linear rate of concentration in probability of the beliefs on the optimal hypothesis set. We provide numerical experiments to illustrate the communication benefits of our method: in the studied scenarios, the simulations show that the number of transmitted bits can be reduced to 5-10% of that of the non-compressed method.
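To make the pattern in the abstract concrete, below is a minimal Python sketch of one round of cooperative learning with compressed beliefs: each agent compresses its belief vector before broadcasting (here with a hypothetical top-k sparsifier standing in for the black-box compression operator), mixes the compressed beliefs received from neighbors through a doubly stochastic matrix W, and reweights by the likelihood of its latest private observation. This is an illustrative simplification under assumed names (top_k, belief_update_round), not the paper's exact update rule; in particular, the paper's algorithm supports arbitrary positive compression rates and quantized operators, which this sketch does not model.

```python
import numpy as np

def top_k(x, k):
    """Black-box compression stand-in: keep the k largest entries, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]
    out[idx] = x[idx]
    return out

def belief_update_round(beliefs, likelihoods, W, k):
    """One synchronous round of cooperative learning with compressed beliefs.

    beliefs     : (n_agents, n_hyp) rows are probability vectors over hypotheses
    likelihoods : (n_agents, n_hyp) likelihood of each agent's latest private
                  observation under each hypothesis
    W           : (n_agents, n_agents) doubly stochastic mixing matrix
    k           : number of belief entries each agent actually transmits
    """
    # Each agent broadcasts only a compressed (sparse) version of its belief.
    compressed = np.stack([top_k(b, k) for b in beliefs])
    # Consensus step: mix the compressed beliefs received from neighbors.
    mixed = W @ compressed
    # Local learning step: reweight by the likelihood of the new observation.
    new = likelihoods * mixed
    return new / new.sum(axis=1, keepdims=True)

# Toy usage: 5 fully connected agents, 100 hypotheses, hypothesis 0 is true.
rng = np.random.default_rng(0)
n_agents, n_hyp = 5, 100
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)
W = np.full((n_agents, n_agents), 1.0 / n_agents)
for _ in range(50):
    likelihoods = rng.uniform(0.1, 1.0, size=(n_agents, n_hyp))
    likelihoods[:, 0] = 1.0  # observations are most likely under hypothesis 0
    beliefs = belief_update_round(beliefs, likelihoods, W, k=10)
```

With k = 10 each agent transmits only 10% of its belief vector per round, which is the source of the 5-10% bandwidth figure the abstract reports; the paper's actual scheme additionally guarantees convergence for arbitrarily aggressive compression, which a naive sparsifier like the one above does not.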

Updated: 2021-02-17