Communication-Efficient Randomized Algorithm for Multi-Kernel Online Federated Learning.
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 23.6) · Pub Date: 2022-11-07 · DOI: 10.1109/tpami.2021.3129809
Songnam Hong, Jeongmin Chae

Online federated learning (OFL) is a promising framework for learning a sequence of global functions from distributed sequential data held at local devices. Within this framework, we first introduce a single-kernel-based OFL (termed S-KOFL) by combining random-feature (RF) approximation, online gradient descent (OGD), and federated averaging (FedAvg). As in the centralized counterpart, an extension to multi-kernel methods is necessary. Harnessing the extension principle of the centralized method, we construct a vanilla multi-kernel algorithm (termed vM-KOFL) and prove its asymptotic optimality. However, it is impractical because its communication overhead grows linearly with the size of the kernel dictionary, and this problem cannot be addressed by the existing communication-efficient techniques (e.g., quantization and sparsification) of conventional federated learning. Our major contribution is a novel randomized algorithm (named eM-KOFL) that exhibits performance similar to vM-KOFL while maintaining a low communication cost. We theoretically prove that eM-KOFL achieves an optimal sublinear regret bound. Mimicking the key idea of eM-KOFL in an efficient way, we further propose a more practical pM-KOFL, which has the same communication overhead as S-KOFL. Via numerical tests on real datasets, we demonstrate that pM-KOFL yields almost the same performance as vM-KOFL (or eM-KOFL) on various online learning tasks.
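To make the S-KOFL construction concrete, below is a minimal sketch of one OFL round with a single Gaussian kernel, assuming random Fourier features for the RF approximation, squared-loss OGD at each client, and plain FedAvg at the server. The function names, toy data, and hyperparameters are illustrative choices for this sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rf_features(x, W, b):
    """Map input x (d,) to D random Fourier features approximating an RBF kernel."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def local_ogd_step(theta, x, y, W, b, lr=0.05):
    """One OGD step on the squared loss at a client, in the RF feature domain."""
    z = rf_features(x, W, b)
    pred = theta @ z
    grad = (pred - y) * z          # gradient of 0.5*(pred - y)^2 w.r.t. theta
    return theta - lr * grad

def fedavg(local_models):
    """Server aggregation: simple average of the clients' RF-domain parameters."""
    return np.mean(local_models, axis=0)

# Toy simulation: K clients receive streaming data over T rounds.
d, D, K, T = 5, 100, 4, 50                                # input dim, #RF features, #clients, #rounds
gamma = 1.0                                               # RBF bandwidth (illustrative)
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))     # shared RF frequencies
b = rng.uniform(0, 2 * np.pi, size=D)                     # shared RF phases

theta_global = np.zeros(D)
true_w = rng.normal(size=d)

for t in range(T):
    local_models = []
    for k in range(K):
        x_t = rng.normal(size=d)                 # client k's new sample at time t
        y_t = np.tanh(true_w @ x_t)              # nonlinear target (toy)
        theta_k = local_ogd_step(theta_global, x_t, y_t, W, b)
        local_models.append(theta_k)
    theta_global = fedavg(local_models)          # broadcast the averaged model
```

In the multi-kernel setting, vM-KOFL would run such an update for every kernel in a dictionary and communicate all of the resulting parameter vectors, which is why its overhead grows linearly with the dictionary size; the randomized eM-KOFL and pM-KOFL schemes instead keep the per-round communication at (or near) the single-kernel level.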

Updated: 2021-11-23