Decentralized and Model-Free Federated Learning: Consensus-Based Distillation in Function Space
IEEE Transactions on Signal and Information Processing over Networks (IF 3.0) Pub Date: 2022-09-09, DOI: 10.1109/tsipn.2022.3205549
Akihito Taya, Takayuki Nishio, Masahiro Morikura, Koji Yamamoto

This paper proposes a fully decentralized federated learning (FL) scheme for Internet of Everything (IoE) devices that are connected via multi-hop networks. Because FL algorithms struggle to make the parameters of machine learning (ML) models converge, this paper focuses on the convergence of ML models in function spaces. Considering that the representative loss functions of ML tasks, e.g., mean squared error (MSE) and Kullback-Leibler (KL) divergence, are convex functionals, algorithms that directly update functions in function spaces could converge to the optimal solution. The key concept of this paper is to tailor a consensus-based optimization algorithm to work in the function space and achieve the global optimum in a distributed manner. This paper first analyzes the convergence of the proposed algorithm in a function space, which is referred to as a meta-algorithm, and shows that spectral graph theory can be applied to the function space in a manner similar to that of numerical vectors. Then, consensus-based multi-hop federated distillation (CMFD) is developed for neural networks (NNs) to implement the meta-algorithm. CMFD leverages knowledge distillation to realize function aggregation among adjacent devices without parameter averaging. An advantage of CMFD is that it works even when the distributed learners use different NN models. Although CMFD does not perfectly reflect the behavior of the meta-algorithm, the discussion of the meta-algorithm's convergence property promotes an intuitive understanding of CMFD, and simulation evaluations show that NN models trained with CMFD converge on several tasks. The simulation results also show that CMFD achieves higher accuracy than parameter aggregation for weakly connected networks, and that CMFD is more stable than parameter aggregation methods.
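The aggregation step described above is easiest to see in code. The following is a minimal sketch, not the authors' implementation: the names `Device`, `shared_x` (a batch of shared distillation inputs), and `adjacency` (a 0/1 list-of-lists over devices) are hypothetical. After local training on private data, each device distills toward the average of its neighbors' soft predictions on the shared inputs, so the consensus step acts on the models' outputs (their functions) rather than their parameters; this is also why neighboring devices may use different architectures.

```python
# Hedged sketch of a CMFD-style round (assumed setting, PyTorch).
import torch
import torch.nn.functional as F

class Device:
    def __init__(self, model, local_loader, lr=1e-3):
        self.model = model
        self.local_loader = local_loader
        self.opt = torch.optim.Adam(model.parameters(), lr=lr)

    def local_step(self):
        # Ordinary supervised update on the device's private data.
        for x, y in self.local_loader:
            self.opt.zero_grad()
            loss = F.cross_entropy(self.model(x), y)
            loss.backward()
            self.opt.step()

    def predict(self, x):
        # Soft predictions used as the exchanged "function values".
        with torch.no_grad():
            return F.softmax(self.model(x), dim=1)

def consensus_distillation_round(devices, adjacency, shared_x):
    """One round: local training, then distillation toward the mean of
    each device's neighbors' outputs on the shared inputs."""
    for dev in devices:
        dev.local_step()
    # Snapshot all outputs first so the consensus update is synchronous.
    outputs = [dev.predict(shared_x) for dev in devices]
    for i, dev in enumerate(devices):
        neighbors = [j for j in range(len(devices)) if adjacency[i][j]]
        if not neighbors:
            continue
        target = torch.stack([outputs[j] for j in neighbors]).mean(dim=0)
        dev.opt.zero_grad()
        # KL divergence pulls device i's function toward the neighborhood
        # average -- consensus on functions, not on parameter vectors.
        log_p = F.log_softmax(dev.model(shared_x), dim=1)
        loss = F.kl_div(log_p, target, reduction="batchmean")
        loss.backward()
        dev.opt.step()
```

Because only predictions on `shared_x` cross the network, each link carries a batch of output vectors instead of full parameter tensors, and no step requires the neighbors' weight shapes to match.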

Updated: 2024-08-26