A Hybrid Architecture for Federated and Centralized Learning
IEEE Transactions on Cognitive Communications and Networking (IF 8.6), Pub Date: 2022-06-08, DOI: 10.1109/tccn.2022.3181032
Ahmet M. Elbir, Sinem Coleri, Anastasios K. Papazafeiropoulos, Pandelis Kourtessis, Symeon Chatzinotas

Many machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS), entailing a huge communication overhead. To overcome this, federated learning (FL) has been suggested as a promising tool, wherein the clients send only their model updates to the PS instead of their whole datasets. However, FL demands powerful computational resources from the clients. In practice, not all clients have sufficient computational resources to participate in training. To address this common scenario, we propose a more efficient approach called hybrid federated and centralized learning (HFCL), wherein only the clients with sufficient resources employ FL, while the remaining ones send their datasets to the PS, which computes the model on their behalf. The model parameters are then aggregated at the PS. To improve the efficiency of dataset transmission, we propose two different techniques: i) increased computation-per-client and ii) sequential data transmission. Notably, the HFCL frameworks outperform FL with up to a 20% improvement in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL, since all of the clients collaborate on the learning process with their datasets.
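The following is a minimal sketch, not the authors' implementation, of one HFCL training round for a simple linear-regression model trained by gradient descent. The client split, data shapes, number of local steps, and the `is_active` flag are illustrative assumptions; the point is that "active" (FL) clients compute their updates locally and send only model parameters, while the PS computes the same kind of update for "passive" (CL) clients from their uploaded datasets, and all per-client models are then averaged at the PS.

```python
# Hypothetical HFCL round: active clients run FL locally, the PS trains on
# behalf of passive clients, then all models are aggregated (FedAvg-style).
import numpy as np

rng = np.random.default_rng(0)
DIM, LOCAL_STEPS, LR = 10, 5, 0.05

def local_sgd(w, X, y, steps=LOCAL_STEPS, lr=LR):
    """A few gradient-descent steps on one dataset (client-side or PS-side)."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Illustrative clients: half have enough compute for FL ("active"),
# the rest upload their datasets to the parameter server ("passive").
clients = [{"X": rng.normal(size=(50, DIM)),
            "y": rng.normal(size=50),
            "is_active": k < 4} for k in range(8)]

w_global = np.zeros(DIM)
for _round in range(20):
    updates = []
    for c in clients:
        if c["is_active"]:
            # FL client: computes the update on its own hardware,
            # transmits only the model parameters to the PS.
            updates.append(local_sgd(w_global, c["X"], c["y"]))
        else:
            # CL client: its dataset was transmitted to the PS once;
            # the PS runs the identical computation on its behalf.
            updates.append(local_sgd(w_global, c["X"], c["y"]))
    # PS aggregates all per-client models into the new global model.
    w_global = np.mean(updates, axis=0)
```

In this sketch both branches perform the same arithmetic; what differs in HFCL is where it runs and what is transmitted. The paper's two efficiency techniques would layer on top of the passive branch: increased computation-per-client lets the PS (or active clients) perform more local steps while datasets are still being uploaded, and sequential data transmission spreads the passive clients' dataset uploads over successive rounds instead of a single bulk transfer.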
