Optimality and Stability in Federated Learning: A Game-theoretic Approach
arXiv - CS - Computers and Society. Pub Date: 2021-06-17, DOI: arxiv-2106.09580
Kate Donahue, Jon Kleinberg

Federated learning is a distributed learning paradigm where multiple agents, each with access only to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy rates of federated learning, but also to provide certain guarantees around social good properties such as total error. One branch of this research has taken a game-theoretic approach; in particular, prior work has viewed federated learning as a hedonic game, where error-minimizing players arrange themselves into federating coalitions. This past work proves the existence of stable coalition partitions, but leaves open a wide range of questions, including how far from optimal these stable solutions are. In this work, we motivate and define a notion of optimality given by the average error rates among federating agents (players). First, we provide and prove the correctness of an efficient algorithm to calculate an optimal (error-minimizing) arrangement of players. Next, we analyze the relationship between the stability and optimality of an arrangement. We show that for some regions of parameter space, all stable arrangements are optimal (Price of Anarchy equal to 1). However, this does not hold in all settings: there exist examples of stable arrangements with higher cost than optimal (Price of Anarchy greater than 1). Finally, we give the first constant-factor bound on the performance gap between stability and optimality, proving that the total error of the worst stable solution can be no higher than 9 times the total error of an optimal solution (Price of Anarchy bound of 9).
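To make the quantities in the abstract concrete, the Python sketch below brute-forces every coalition partition of a small set of players, finds an error-minimizing (optimal) partition, checks core stability, and reports the resulting Price of Anarchy (worst stable total error divided by optimal total error). The error function err(i, coalition) is a hypothetical toy model, not the paper's mean-estimation model, and the brute-force search is only illustrative; it is not the efficient optimal-arrangement algorithm the authors provide.

# Illustrative sketch only: assumes a toy error model and brute-force enumeration,
# not the paper's model or algorithm.
from itertools import combinations

def partitions(players):
    # Recursively yield every partition of `players` into coalitions.
    if not players:
        yield []
        return
    first, rest = players[0], players[1:]
    for smaller in partitions(rest):
        # Place `first` into each existing coalition in turn...
        for k, coalition in enumerate(smaller):
            yield smaller[:k] + [[first] + coalition] + smaller[k + 1:]
        # ...or let `first` federate alone.
        yield [[first]] + smaller

def err(i, coalition, n):
    # Hypothetical toy error model (an assumption, not the paper's):
    # pooling samples reduces noise, but each extra partner adds a small bias.
    total_samples = sum(n[j] for j in coalition)
    return 1.0 / total_samples + 0.05 * (len(coalition) - 1)

def total_error(partition, n):
    # Objective from the abstract: the sum (equivalently, average) of player errors.
    return sum(err(i, c, n) for c in partition for i in c)

def is_core_stable(partition, players, n):
    # Core stability: no group of players can jointly deviate to a new coalition
    # in which every member strictly lowers their error.
    current = {i: c for c in partition for i in c}
    for size in range(1, len(players) + 1):
        for deviators in combinations(players, size):
            if all(err(i, deviators, n) < err(i, current[i], n) for i in deviators):
                return False
    return True

if __name__ == "__main__":
    n = {0: 10, 1: 10, 2: 200}            # toy local sample counts per player
    players = list(n)
    all_parts = list(partitions(players))
    optimal = min(total_error(p, n) for p in all_parts)
    stable = [p for p in all_parts if is_core_stable(p, players, n)]
    if stable:
        worst_stable = max(total_error(p, n) for p in stable)
        print("Price of Anarchy on this toy instance:", worst_stable / optimal)
    else:
        print("No core-stable partition in this toy instance.")

Under the paper's model, the constant-factor result states that this ratio never exceeds 9; the toy instance above only demonstrates how the optimality, stability, and Price of Anarchy quantities are defined.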

Updated: 2021-06-18