Toward Node Liability in Federated Learning: Computational Cost and Network Overhead
IEEE Communications Magazine (IF 8.3). Pub Date: 2021-10-11, DOI: 10.1109/mcom.011.2100231
Francesco Malandrino, Carla Fabiana Chiasserini

Many machine learning (ML) techniques suffer from the drawback that their output (e.g., a classification decision) is not clearly and intuitively connected to their input (e.g., an image). To cope with this issue, several explainable ML techniques have been proposed to, for example, identify which pixels of an input image had the strongest influence on its classification. However, in distributed scenarios, it is often more important to connect decisions with the information used for the model training and the nodes supplying such information. To this end, in this article we focus on federated learning and present a new methodology, named node liability in federated learning (NL-FL), which permits identifying the source of the training information that most contributed to a given decision. After discussing NL-FL's cost in terms of extra computation, storage, and network latency, we demonstrate its usefulness in an edge-based scenario. We find that NL-FL is able to swiftly identify misbehaving nodes and to exclude them from the training process, thereby improving learning accuracy.
