Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning
arXiv - CS - Networking and Internet Architecture. Pub Date: 2020-07-14. DOI: arxiv-2007.07174. Authors: Wenqi Shi, Sheng Zhou, Zhisheng Niu, Miao Jiang, Lu Geng
In federated learning (FL), devices contribute to the global training by
uploading their local model updates via wireless channels. Due to limited
computation and communication resources, device scheduling is crucial to the
convergence rate of FL. In this paper, we propose a joint device scheduling and
resource allocation policy to maximize the model accuracy within a given total
training time budget for latency-constrained wireless FL. A lower bound on the
reciprocal of the training performance loss, in terms of the number of training
rounds and the number of scheduled devices per round, is derived. Based on the
bound, the accuracy maximization problem is solved by decoupling it into two
sub-problems. First, given the scheduled devices, the optimal bandwidth
allocation suggests allocating more bandwidth to the devices with worse channel
conditions or weaker computation capabilities. Then, a greedy device scheduling
algorithm is introduced, which at each step selects the device with the least
update time under the optimal bandwidth allocation, stopping once the lower
bound begins to increase, i.e., when scheduling more devices would degrade the
model accuracy. Experiments show that the proposed policy outperforms
state-of-the-art scheduling policies across a wide range of data distributions
and cell radii.
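
To make the two-step solution concrete, below is a minimal Python sketch that mirrors the structure the abstract describes, under an assumed latency model: per-round bandwidth allocation equalizes device finishing times (so devices with worse channels or weaker compute receive more bandwidth), and a greedy loop adds the fastest-to-update device until a surrogate bound stops improving. The Device fields, the bisection-based allocation, and the rounds-times-sqrt surrogate are illustrative assumptions, not the paper's actual expressions.

from dataclasses import dataclass
from math import sqrt
from typing import List

@dataclass
class Device:
    name: str
    compute_time: float   # local computation time per round, seconds (assumed known)
    spectral_eff: float   # achievable rate per unit bandwidth, bits/s/Hz (assumed)
    update_bits: float    # size of the uploaded model update, bits

def round_latency(devices: List[Device], total_bw: float) -> float:
    # Under this assumed model, the optimal allocation equalizes finishing
    # times: a device meeting deadline T needs bandwidth
    # update_bits / (spectral_eff * (T - compute_time)), so worse channels
    # or slower compute get more bandwidth. Bisection finds the smallest
    # deadline T whose bandwidth demand fits within total_bw.
    lo = max(d.compute_time for d in devices)
    hi = lo + 1.0 + sum(d.update_bits for d in devices) / (
        total_bw * min(d.spectral_eff for d in devices))
    for _ in range(60):
        mid = (lo + hi) / 2.0
        need = sum(d.update_bits / (d.spectral_eff * (mid - d.compute_time))
                   for d in devices)
        lo, hi = (mid, hi) if need > total_bw else (lo, mid)
    return hi

def greedy_schedule(devices: List[Device], total_bw: float,
                    time_budget: float) -> List[Device]:
    # At each step, add the device that keeps the round latency smallest;
    # stop when the bound surrogate (rounds * sqrt(#devices), an assumed
    # stand-in for the paper's derived bound) stops improving.
    scheduled: List[Device] = []
    remaining = list(devices)
    best = 0.0
    while remaining:
        cand = min(remaining,
                   key=lambda d: round_latency(scheduled + [d], total_bw))
        latency = round_latency(scheduled + [cand], total_bw)
        score = (time_budget // latency) * sqrt(len(scheduled) + 1)
        if score <= best:
            break          # more devices would now hurt the surrogate bound
        best = score
        scheduled.append(cand)
        remaining.remove(cand)
    return scheduled

if __name__ == "__main__":
    fleet = [Device(f"d{i}", 0.5 + 0.1 * i, 1.0 + 0.3 * i, 1e6)
             for i in range(8)]
    chosen = greedy_schedule(fleet, total_bw=1e6, time_budget=600.0)
    print([d.name for d in chosen])

The main-guard example only exercises the sketch on synthetic devices; the stopping rule is where the paper's derived lower bound would replace the assumed surrogate.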
Updated: 2020-07-15