Federated Learning via Over-the-Air Computation
IEEE Transactions on Wireless Communications (IF 8.9), Pub Date: 2020-03-01, DOI: 10.1109/twc.2019.2961673
Kai Yang, Tao Jiang, Yuanming Shi, Zhi Ding

The stringent low-latency and privacy requirements of emerging high-stakes applications involving intelligent devices, such as drones and smart vehicles, make cloud computing inapplicable in these scenarios. Instead, edge machine learning is becoming increasingly attractive for performing training and inference directly at the network edge without sending data to a centralized data center. This has stimulated a nascent field termed federated learning, in which a machine learning model is trained in a distributed manner on mobile devices with limited computation, storage, energy, and bandwidth. To preserve data privacy and address the issue of unbalanced and non-IID data points across different devices, the federated averaging algorithm has been proposed for global model aggregation by computing the weighted average of the locally updated models at the selected devices. However, the limited communication bandwidth becomes the main bottleneck for aggregating these locally computed updates. We thus propose a novel over-the-air computation based approach for fast global model aggregation that exploits the superposition property of a wireless multiple-access channel. This is achieved by joint device selection and beamforming design, which is modeled as a sparse and low-rank optimization problem to support efficient algorithm design. To this end, we provide a difference-of-convex-functions (DC) representation of the sparse and low-rank functions to enhance sparsity and accurately detect the fixed-rank constraint during device selection. A DC algorithm is further developed to solve the resulting DC program with global convergence guarantees. The algorithmic advantages and strong performance of the proposed methodologies are demonstrated through extensive numerical results.
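To make the aggregation step concrete, the following is a minimal sketch of federated averaging combined with an idealized over-the-air aggregation step. It assumes a noiseless multiple-access channel, and the names (local_update, over_the_air_aggregate, data_sizes) are illustrative rather than taken from the paper; the paper itself additionally designs device selection and receive beamforming to control the aggregation error over a fading channel.

```python
import numpy as np

def local_update(global_model):
    # Placeholder for a few local SGD steps on the device's own data;
    # a random direction stands in for a real gradient here.
    grad = np.random.randn(*global_model.shape)
    return global_model - 0.01 * grad

def over_the_air_aggregate(local_models, data_sizes):
    # Each device pre-scales its local model by its data size and all
    # devices transmit simultaneously; the multiple-access channel adds
    # the analog waveforms, so the server obtains the weighted sum in a
    # single channel use instead of decoding each device separately.
    superposed = sum(n_k * w_k for n_k, w_k in zip(data_sizes, local_models))
    return superposed / sum(data_sizes)  # federated averaging

# One communication round over K selected devices (idealized, noiseless).
K, dim = 10, 50
global_model = np.zeros(dim)
data_sizes = np.random.randint(100, 1000, size=K)
local_models = [local_update(global_model) for _ in range(K)]
global_model = over_the_air_aggregate(local_models, data_sizes)
```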

Updated: 2020-03-01