Efficient and Less Centralized Federated Learning
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2021-06-11, DOI: arXiv:2106.06627
Li Chou, Zichang Liu, Zhuang Wang, Anshumali Shrivastava

With the rapid growth of mobile computing, massive amounts of data and computing resources are now located at the edge. To this end, federated learning (FL) is becoming a widely adopted distributed machine learning (ML) paradigm, which aims to harness this growing, skewed data locally in order to develop rich and informative models. In centralized FL, a collection of devices collaboratively solves an ML task under the coordination of a central server. However, existing FL frameworks make overly simplistic assumptions about network connectivity and ignore the communication bandwidth of the different links in the network. In this paper, we present and study a novel FL algorithm in which devices mostly collaborate with other devices in a pairwise manner. Our nonparametric approach is able to exploit network topology to reduce communication bottlenecks. We evaluate our approach on various FL benchmarks and demonstrate that our method achieves a 10X improvement in communication efficiency and around an 8% increase in accuracy compared to the centralized approach.
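To make the contrast concrete, the sketch below illustrates, under simple assumptions, the difference between a centralized FedAvg-style aggregation (all devices upload to a server) and a pairwise, gossip-style exchange in which neighboring devices mix their parameters directly. This is not the authors' algorithm, only a minimal toy example; the function names (`fedavg_aggregate`, `pairwise_mix`) and the flat parameter vectors standing in for models are hypothetical.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): contrast a
# centralized FedAvg-style aggregation with a pairwise, gossip-style
# parameter exchange between two neighboring devices.

def fedavg_aggregate(models, weights):
    """Central server: weighted average of all uploaded device models."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def pairwise_mix(model_a, model_b, alpha=0.5):
    """Two devices exchange and mix parameters directly, with no server."""
    mixed = alpha * model_a + (1.0 - alpha) * model_b
    return mixed, mixed  # both devices keep the mixed model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "models": flat parameter vectors for four devices.
    models = [rng.normal(size=5) for _ in range(4)]

    # Centralized round: every device communicates with the server.
    global_model = fedavg_aggregate(models, weights=[1, 1, 1, 1])

    # Decentralized round: devices 0<->1 and 2<->3 mix pairwise,
    # communicating only over their local links.
    models[0], models[1] = pairwise_mix(models[0], models[1])
    models[2], models[3] = pairwise_mix(models[2], models[3])
    print(global_model, models[0])
```

In the pairwise setting, each round only uses the links between matched neighbors, which is why a topology-aware pairing can avoid the bandwidth bottleneck at a single central server.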

Updated: 2021-06-15