Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing
IEEE Transactions on Parallel and Distributed Systems (IF 5.3), Pub Date: 2021-07-21, DOI: 10.1109/tpds.2021.3098467
Jed Mills, Jia Hu, Geyong Min

Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous works have shown that non-Independent and Identically Distributed (non-IID) user data harms the convergence speed of the FL algorithms. Furthermore, most existing work on FL measures global-model accuracy, but in many cases, such as user content recommendation, improving individual User model Accuracy (UA) is the real objective. To address these issues, we propose a Multi-Task FL (MTFL) algorithm that introduces non-federated Batch-Normalization (BN) layers into the federated DNN. MTFL benefits UA and convergence speed by allowing users to train models personalised to their own data. MTFL is compatible with popular iterative FL optimisation algorithms such as Federated Averaging (FedAvg), and we show empirically that a distributed form of Adam optimisation (FedAvg-Adam) benefits convergence speed even further when used as the optimisation strategy within MTFL. Experiments using MNIST and CIFAR10 demonstrate that MTFL is able to significantly reduce the number of rounds required to reach a target UA, by up to $5\times$ when using existing FL optimisation strategies, and with a further $3\times$ improvement when using FedAvg-Adam. We compare MTFL to competing personalised FL algorithms, showing that it is able to achieve the best UA for MNIST and CIFAR10 in all considered scenarios. Finally, we evaluate MTFL with FedAvg-Adam on an edge-computing testbed, showing that its convergence and UA benefits outweigh its overhead.
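To make the core mechanism described in the abstract concrete, below is a minimal Python/NumPy sketch (not the authors' implementation) of the non-federated BN idea: BN parameters stay private on each client, while the remaining parameters are aggregated FedAvg-style with data-size weighting. The parameter names and the is_bn_param naming rule are assumptions made for illustration only.

```python
# Minimal sketch, assuming models are dicts of NumPy arrays and BN parameters
# carry a "bn" prefix. BN layers are kept local (personalised); all other
# parameters are averaged across clients as in FedAvg.
import numpy as np

def is_bn_param(name: str) -> bool:
    """Assumed naming convention: BN parameters start with 'bn'."""
    return name.startswith("bn")

def fedavg_aggregate(client_models, client_sizes):
    """Weighted average of the federated (non-BN) parameters only."""
    total = sum(client_sizes)
    global_update = {}
    for name in client_models[0]:
        if is_bn_param(name):
            continue  # non-federated: each client keeps its own BN layers
        global_update[name] = sum(
            (n / total) * m[name] for m, n in zip(client_models, client_sizes)
        )
    return global_update

def apply_global_update(local_model, global_update):
    """Clients overwrite shared layers but retain personalised BN layers."""
    for name, value in global_update.items():
        local_model[name] = value.copy()
    return local_model

# Toy usage: two clients sharing a fully connected layer, each with private BN
# scale/shift parameters that never leave the device.
rng = np.random.default_rng(0)
clients = [
    {"fc_w": rng.normal(size=(4, 4)), "bn_gamma": np.ones(4), "bn_beta": np.zeros(4)}
    for _ in range(2)
]
update = fedavg_aggregate(clients, client_sizes=[100, 300])
clients = [apply_global_update(m, update) for m in clients]
```

In a full MTFL round, each client would also run local training (e.g. SGD, or the Adam-based FedAvg-Adam variant mentioned in the abstract) between aggregation steps; the sketch only shows which parameters are shared versus kept personalised.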

Updated: 2021-08-13