Relay-Assisted Cooperative Federated Learning
arXiv - CS - Networking and Internet Architecture. Pub Date: 2021-07-20. DOI: arxiv-2107.09518. Authors: Zehong Lin, Hang Liu, Ying-Jun Angela Zhang
Federated learning (FL) has recently emerged as a promising technology to
enable artificial intelligence (AI) at the network edge, where distributed
mobile devices collaboratively train a shared AI model under the coordination
of an edge server. To significantly improve the communication efficiency of FL,
over-the-air computation allows a large number of mobile devices to
concurrently upload their local models by exploiting the superposition property
of wireless multi-access channels. Due to wireless channel fading, the model
aggregation error at the edge server is dominated by the weakest channel among
all devices, causing severe straggler issues. In this paper, we propose a
relay-assisted cooperative FL scheme to effectively address the straggler
issue. In particular, we deploy multiple half-duplex relays to cooperatively
assist the devices in uploading the local model updates to the edge server. The
nature of the over-the-air computation poses system objectives and constraints
that are distinct from those in traditional relay communication systems.
Moreover, the strong coupling between the design variables renders the
optimization of such a system challenging. To tackle the issue, we propose an
alternating-optimization-based algorithm to optimize the transceiver and relay
operation with low complexity. Then, we analyze the model aggregation error in
a single-relay case and show that our relay-assisted scheme achieves a smaller
error than the one without relays provided that the relay transmit power and
the relay channel gains are sufficiently large. The analysis provides critical
insights into relay deployment for practical cooperative FL. Extensive
numerical results show that our design achieves faster convergence compared
with state-of-the-art schemes.
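The straggler effect described above can be illustrated with a toy simulation. The sketch below is not from the paper: it assumes a simple truncated channel-inversion policy, where each device pre-scales its update by a common factor eta over its channel gain, so eta is capped by the weakest channel and the post-scaling noise blows up when one device is in a deep fade. All variable names and numeric values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8            # model dimension (illustrative)
K = 5            # number of devices
P = 1.0          # per-device transmit power budget
noise_std = 0.1  # receiver noise standard deviation

# Hypothetical local model updates, one row per device.
models = rng.normal(size=(K, d))
target = models.mean(axis=0)  # ideal (noise-free) aggregate

def ota_aggregate(channel_gains):
    """Over-the-air aggregation with channel inversion.

    Device k transmits (eta / h_k) * x_k; the superposed receive signal
    is sum_k h_k * (eta / h_k) * x_k = eta * sum_k x_k plus noise.
    The common factor eta must satisfy (eta / h_k)^2 * E[x_k^2] <= P
    for every k, so it is limited by the weakest channel.
    """
    h = np.asarray(channel_gains, dtype=float)
    power_per_entry = (models ** 2).mean(axis=1)   # avg symbol power per device
    # Largest eta that keeps all devices within their power budget:
    eta = np.sqrt(P * (h ** 2 / power_per_entry).min())
    tx = (eta / h)[:, None] * models               # pre-scaled transmit signals
    rx = (h[:, None] * tx).sum(axis=0)             # superposition at the server
    rx += rng.normal(scale=noise_std, size=d)      # additive receiver noise
    return rx / (eta * K)                          # de-scale to recover the mean

good = ota_aggregate([1.0, 0.9, 1.1, 1.0, 0.95])   # balanced channels
straggler = ota_aggregate([1.0, 0.9, 1.1, 1.0, 0.05])  # one deep fade

mse_good = np.mean((good - target) ** 2)
mse_bad = np.mean((straggler - target) ** 2)
print(f"MSE, balanced channels: {mse_good:.5f}")
print(f"MSE, one straggler:     {mse_bad:.5f}")
```

Because the superposed signal is de-scaled by eta, the effective noise variance grows as 1/eta^2, so a single weak channel inflates the aggregation error for everyone. A relay that boosts the straggler's effective channel gain (the paper's multi-relay design optimizes this jointly with the transceivers) would let eta stay large.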
Updated: 2021-07-21