Robust Federated Learning by Mixture of Experts
arXiv - CS - Neural and Evolutionary Computing. Pub Date: 2021-04-23, DOI: arxiv-2104.11700
Saeedeh Parsaeefard, Sayed Ehsan Etesami, Alberto Leon Garcia

We present a novel weighted-average model based on the mixture of experts (MoE) concept to provide robustness in federated learning (FL) against poisoned, corrupted, or outdated local models. These threats, along with the non-IID nature of the data sets, can considerably diminish the accuracy of the FL model. Our proposed MoE-FL setup relies on trust between the users and the server, where the users share a portion of their public data sets with the server. The server applies a robust aggregation method, either by solving an optimization problem or by the Softmax method, to highlight outlier cases and reduce their adverse effect on the FL process. Our experiments illustrate that MoE-FL outperforms the traditional aggregation approach under high rates of poisoned data from attackers.
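To make the aggregation step concrete, below is a minimal sketch of a softmax-weighted aggregation rule in the spirit the abstract describes: the server scores each client's local model on the shared public data and down-weights models that fit it poorly. All names (aggregate_moe_fl, evaluate losses, temperature) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: softmax-weighted aggregation of client models, where
# weights come from each client's loss on server-held public data. Poisoned,
# corrupted, or outdated models tend to have high public-data loss and thus
# receive small weights. Not the authors' code; an assumed illustration.

import numpy as np

def softmax(scores, temperature=1.0):
    """Numerically stable softmax over a 1-D array of scores."""
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def aggregate_moe_fl(local_models, public_losses, temperature=1.0):
    """Weighted average of flattened client parameter vectors.

    local_models : list of 1-D np.ndarray, one parameter vector per client
    public_losses: list of float, each client's loss on the shared public data
    Lower loss -> higher weight, so outlier models contribute little.
    """
    weights = softmax([-loss for loss in public_losses], temperature)
    stacked = np.stack(local_models)   # shape: (num_clients, num_params)
    return weights @ stacked           # softmax-weighted average

# Toy usage: two honest clients and one "poisoned" client with high public loss.
if __name__ == "__main__":
    models = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([10.0, -10.0])]
    losses = [0.3, 0.35, 5.0]          # the poisoned model fits public data poorly
    print(aggregate_moe_fl(models, losses))  # close to the honest clients' average
```

The temperature parameter (an assumed knob, not from the paper) controls how sharply the aggregation discounts high-loss clients.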

Updated: 2021-04-26