Federated Learning for Physical Layer Design
arXiv - CS - Information Theory Pub Date : 2021-02-23 , DOI: arxiv-2102.11777 Ahmet M. Elbir, Anastasios K. Papazafeiropoulos, Symeon Chatzinotas
Model-free techniques, such as machine learning (ML), have recently attracted
much interest for physical layer design, e.g., symbol detection, channel
estimation, and beamforming. Most of these ML techniques employ centralized
learning (CL) schemes and assume the availability of datasets at a parameter
server (PS), demanding the transmission of data from edge devices, such as
mobile phones, to the PS. Exploiting the data generated at the edge, federated
learning (FL) has recently been proposed as a distributed learning scheme, in
which each device computes the model parameters and sends them to the PS for
model aggregation, while the datasets remain at the edge. Thus, FL is
more communication-efficient and privacy-preserving than CL, and it is well
suited to wireless communication scenarios, wherein the data are generated at
the edge devices. This article discusses recent advances in FL-based training
for physical layer design problems, and identifies the related design
challenges along with possible solutions to improve performance in terms of
communication overhead and model/data/hardware complexity.
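The FL training loop described above — each device refines the model on its private data and the PS only aggregates the resulting weights — can be sketched as follows. This is a minimal, illustrative federated averaging (FedAvg) round for a linear model; the function names, learning rate, and the weighted-average aggregation rule are illustrative assumptions, not the specific schemes surveyed in the article.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One edge device: refine global weights w on its private data (X, y).
    Only the updated weights leave the device, never the dataset itself."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w -= lr * grad
    return w

def federated_round(w_global, devices):
    """Parameter server: collect local models and aggregate them,
    weighting each device by the size of its local dataset."""
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    local_ws = np.stack([local_update(w_global, X, y) for X, y in devices])
    return (sizes / sizes.sum()) @ local_ws  # weighted average of weights

# Synthetic setup: three devices, each with its own locally generated data.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
devices = []
for n in (40, 60, 50):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.01 * rng.normal(size=n)
    devices.append((X, y))

# Training: repeated rounds of local computation plus server-side aggregation.
w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, devices)
print(np.round(w, 2))
```

Note that only model parameters (here, a length-2 weight vector) cross the network each round, whereas a CL scheme would ship all 150 samples to the PS — this is the communication-efficiency and privacy argument the abstract makes.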
Updated: 2021-02-24