Wireless Distributed Edge Learning: How Many Edge Devices Do We Need?
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2020-11-22. arXiv:2011.10894
Jaeyoung Song, Marios Kountouris

We consider distributed machine learning at the wireless edge, where a parameter server builds a global model with the help of multiple wireless edge devices that perform computations on local dataset partitions. Edge devices transmit the results of their computations (updates to the current global model) to the server at a fixed rate using orthogonal multiple access over an error-prone wireless channel. In case of a transmission error, the undelivered packet is retransmitted until it is successfully decoded at the receiver. Leveraging the fundamental tradeoff between computation and communication in distributed systems, we aim to derive how many edge devices are needed to minimize the average completion time while guaranteeing convergence. We provide upper and lower bounds on the average completion time, and we find a necessary condition for adding edge devices in two asymptotic regimes, namely the large-dataset and the high-accuracy regimes. Experiments on real datasets and numerical results confirm our analysis and substantiate our claim that the number of edge devices should be carefully selected for timely distributed edge learning.
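The computation–communication tradeoff described above can be sketched with a toy Monte Carlo model. All parameter values, the D/n computation scaling, and the per-packet error probability below are illustrative assumptions, not the paper's actual system model: adding devices shrinks each device's local computation but makes the round wait for the slowest link, whose packet may need several retransmissions.

```python
import random

# Hypothetical model (not from the paper): computation time per round scales
# as D/n for a dataset of D samples split over n devices; each device's
# update is one fixed-rate packet decoded with probability 1 - p_err per
# attempt, and a failed packet is retransmitted until decoded (a geometric
# number of slots). With orthogonal access, the round ends when the slowest
# device's packet is finally decoded.

def attempts_until_decoded(p_err: float, rng: random.Random) -> int:
    """Sample the geometric number of transmission attempts for one packet."""
    attempts = 1
    while rng.random() < p_err:  # retransmit on error
        attempts += 1
    return attempts

def avg_completion_time(n: int, dataset_size: int = 10_000,
                        t_compute: float = 1e-4, t_slot: float = 0.05,
                        p_err: float = 0.1, trials: int = 2_000) -> float:
    """Monte Carlo estimate of the average per-round completion time."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(trials):
        compute = (dataset_size / n) * t_compute           # shrinks with n
        slots = max(attempts_until_decoded(p_err, rng)     # grows with n
                    for _ in range(n))
        total += compute + slots * t_slot
    return total / trials

if __name__ == "__main__":
    # Sweep the number of devices: completion time first drops as computation
    # is parallelized, then flattens or rises as the slowest link dominates.
    for n in (1, 5, 20, 100, 500):
        print(f"n={n:4d}  avg completion time ~ {avg_completion_time(n):.3f}")
```

Sweeping n in the driver loop shows the non-monotone behavior that motivates the paper's question: the minimizing n sits at a moderate value, so simply adding more edge devices does not keep reducing completion time.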

Updated: 2020-11-25