Stochastic Client Selection for Federated Learning with Volatile Clients
arXiv - CS - Distributed, Parallel, and Cluster Computing. Pub Date: 2020-11-17, DOI: arxiv-2011.08756
Tiansheng Huang, Weiwei Lin, Keqin Li, and Albert Y. Zomaya

Federated Learning (FL), an emerging secure learning paradigm, has received notable attention from the public. In each round of synchronous FL training, only a fraction of the available clients are chosen to participate, and this selection can directly or indirectly affect both training efficiency and final model performance. In this paper, we investigate the client selection problem in a volatile context, in which the local training of heterogeneous clients is liable to fail for various reasons and with varying frequency. Intuitively, excessive training failures reduce training efficiency and should therefore be mitigated through proper client selection. Motivated by this observation, we model effective participation under a deadline-based aggregation mechanism as the objective of our problem, and incorporate the degree of fairness, another critical factor that may influence training performance, as an expected constraint. To solve the proposed selection problem efficiently, we propose E3CS, a stochastic client selection scheme based on an adversarial bandit solution, and we further corroborate its effectiveness through experiments on real data. According to the experimental results, under a proper setting our selection scheme achieves 20 to 50 percent faster convergence to a fixed model accuracy while maintaining the same level of final accuracy, compared with the vanilla selection scheme in FL.
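The abstract does not spell out the E3CS update rule, so the following is only a minimal, illustrative sketch of the idea it describes: an Exp3-style adversarial-bandit selector that stochastically samples K of N volatile clients per round, rewards clients whose local training finishes before the deadline (effective participation), and mixes a uniform floor into the selection distribution as a rough stand-in for the fairness constraint. All parameters (N, K, eta, gamma) and the finish_prob success rates are hypothetical, and the simple without-replacement draw is a simplification of whatever sampling scheme the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20      # total number of volatile clients (hypothetical setting)
K = 5       # clients selected per round
T = 200     # FL rounds
eta = 0.3   # exponential-weight learning rate (illustrative value)
gamma = 0.2 # uniform-mixing factor: keeps every client selectable (rough fairness floor)

# Hypothetical probability that each client finishes local training before the deadline.
finish_prob = rng.uniform(0.2, 0.95, size=N)

weights = np.ones(N)
effective = 0  # number of deadline-respecting (effective) participations observed

for t in range(T):
    # Selection distribution: exponential weights mixed with a uniform floor.
    probs = (1 - gamma) * weights / weights.sum() + gamma / N

    # Draw K distinct clients for this round (a simplification of the paper's
    # stochastic selection; the exact sampling scheme of E3CS is not given here).
    chosen = rng.choice(N, size=K, replace=False, p=probs)

    for i in chosen:
        # Reward = 1 if the client returns its update before the deadline, else 0.
        reward = float(rng.random() < finish_prob[i])
        effective += reward
        # Importance-weighted exponential update, as in Exp3-style adversarial bandits.
        weights[i] *= np.exp(eta * reward / (probs[i] * N))

    # Rescale to avoid numerical overflow; selection probabilities are unchanged.
    weights /= weights.max()

print(f"effective participations over {T} rounds: {int(effective)} / {T * K}")
```

In this sketch, clients that reliably beat the deadline accumulate larger weights and are selected more often, while the gamma floor ensures unreliable clients still receive occasional selections, loosely reflecting the paper's trade-off between effective participation and fairness.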

Updated: 2020-11-18