Knowledge-aided Federated Learning for Energy-limited Wireless Networks
arXiv - EE - Signal Processing. Pub Date: 2022-09-25. DOI: arxiv-2209.12277. Authors: Zhixiong Chen, Wenqiang Yi, Yuanwei Liu, Arumugam Nallanathan
Conventional model aggregation-based federated learning (FL) approaches
require all local models to share the same architecture and therefore cannot
support practical scenarios with heterogeneous local models. Moreover,
frequent model exchange is costly for resource-limited wireless networks,
since modern deep neural networks typically have millions of parameters. To
tackle these challenges, we first propose a novel knowledge-aided FL (KFL)
framework, which aggregates lightweight high-level data features, namely
knowledge, in each learning round. This framework allows devices to design
their machine learning models independently, and it also reduces the
communication overhead of the training process. We then theoretically analyze
the convergence bound of the proposed framework under a non-convex loss
function setting, revealing that large data volumes should be scheduled in
the early rounds when the total data volume over the entire learning course
is fixed. Inspired by this, we define a new objective function, i.e., the
weighted scheduled data sample volume, to transform the intractable global
loss minimization problem into a tractable one for device scheduling,
bandwidth allocation, and power control. To deal with the unknown
time-varying wireless channels, we transform the problem into a deterministic
one with the aid of the Lyapunov optimization framework. We then develop an
efficient online device scheduling algorithm to achieve an energy-learning
trade-off in the learning process. Experimental results on two typical
datasets (i.e., MNIST and CIFAR-10) under highly heterogeneous local data
distributions show that the proposed KFL reduces communication overhead by
over 99% while achieving better learning performance than conventional model
aggregation-based algorithms.
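The communication saving comes from exchanging compact "knowledge" instead of full model weights. A minimal sketch of this idea, assuming (hypothetically) that knowledge takes the form of per-class feature centroids and that the server averages them weighted by each device's scheduled data volume — the function name, shapes, and weighting are illustrative, not the paper's exact protocol:

```python
import numpy as np

def aggregate_knowledge(centroids, data_volumes):
    """Weighted average of per-class feature centroids ("knowledge").

    centroids: list of (num_classes, feature_dim) arrays, one per device;
               devices may use entirely different local model architectures,
               as long as they emit features of a common dimension.
    data_volumes: scheduled sample counts, one per device.
    Returns the aggregated (num_classes, feature_dim) knowledge matrix.
    """
    weights = np.asarray(data_volumes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(centroids)                   # (num_devices, C, D)
    return np.tensordot(weights, stacked, axes=1)   # (C, D)

# Two devices with heterogeneous local models share only a small C x D
# matrix per round (here 3 classes x 4 features = 12 floats), versus
# millions of parameters for full-model aggregation.
dev_a = np.ones((3, 4))          # device A's per-class centroids
dev_b = 3.0 * np.ones((3, 4))    # device B's per-class centroids
global_knowledge = aggregate_knowledge([dev_a, dev_b], data_volumes=[100, 300])
```

With volumes [100, 300], device B gets weight 0.75, so every aggregated entry is 0.25·1 + 0.75·3 = 2.5; the payload size depends only on (classes × feature dim), not on any local model's parameter count.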
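The online scheduler rests on the standard Lyapunov drift-plus-penalty pattern: a virtual queue tracks cumulative energy overuse, and each round a device is scheduled only when its learning reward outweighs its queue-weighted energy cost. The sketch below is a generic illustration of that pattern under assumed per-device weights and energy costs, not the paper's actual algorithm:

```python
import numpy as np

def lyapunov_schedule(weights, data, energy, Q, e_budget, V=10.0):
    """One round of drift-plus-penalty device scheduling (illustrative).

    weights:  importance weight w_k per device (assumed given)
    data:     schedulable data volume D_k per device
    energy:   energy cost e_k of scheduling device k this round
    Q:        current virtual energy queue backlog
    e_budget: per-round average energy budget
    V:        trade-off knob (larger V favors learning over energy)

    Device k is scheduled when V * w_k * D_k >= Q * e_k, then the virtual
    queue absorbs the energy actually spent minus the budget.
    """
    reward = V * np.asarray(weights, dtype=float) * np.asarray(data, dtype=float)
    penalty = Q * np.asarray(energy, dtype=float)
    scheduled = reward >= penalty
    used = float(np.sum(np.asarray(energy, dtype=float)[scheduled]))
    Q_next = max(Q + used - e_budget, 0.0)   # queue update: overuse accumulates
    return scheduled, Q_next

# Three devices; the energy-hungry middle device is skipped because the
# backlog Q makes its penalty exceed its learning reward.
scheduled, Q_next = lyapunov_schedule(
    weights=[1.0, 1.0, 1.0],
    data=[10, 2, 5],
    energy=[1.0, 5.0, 1.0],
    Q=30.0, e_budget=2.0, V=10.0,
)
```

As Q grows, scheduling becomes stingier with energy; as V grows, data volume dominates — which is exactly the energy-learning trade-off the abstract describes.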
Updated: 2022-09-27