Fast-convergent federated learning with class-weighted aggregation
Journal of Systems Architecture (IF 3.7), Pub Date: 2021-04-03, DOI: 10.1016/j.sysarc.2021.102125
Zezhong Ma, Mengying Zhao, Xiaojun Cai, Zhiping Jia

Recently, federated learning has attracted great attention due to its advantage of enabling model training in a distributed manner. Instead of uploading data for centralized training, it allows devices to keep local data private and send only model parameters to the server. The server then aggregates the local models to derive a global model. In this paper, we study the aggregation problem in federated learning, especially with non-independent and identically distributed (non-IID) data. Since the existing scheme may degrade the representativeness of local models after aggregation, we propose to reallocate the weights of local models based on their contributions to each class. Two class-weighted aggregation strategies are then developed to improve communication efficiency in federated learning. Evaluation shows that the proposed schemes reduce communication costs by 30.49% and 23.59% compared with FedAvg.
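The abstract contrasts FedAvg's dataset-size weighting with aggregation weighted by each client's per-class contribution. The following is a minimal sketch of that idea, assuming a simple interpretation (the paper's exact two strategies are not given here): each client reports per-class sample counts, the server weights each local model by its share of every class, aggregates once per class, and averages the results. All names (`fedavg`, `class_weighted_aggregate`) are hypothetical.

```python
import numpy as np

def fedavg(models, sizes):
    """Baseline FedAvg: weight each local model by its dataset size."""
    total = sum(sizes)
    return sum(m * (n / total) for m, n in zip(models, sizes))

def class_weighted_aggregate(models, class_counts):
    """Hypothetical class-weighted aggregation sketch.

    class_counts[k][c] = number of samples of class c held by client k.
    A client's weight for class c is its share of all class-c samples;
    the server forms one aggregate per class, then averages them.
    """
    counts = np.asarray(class_counts, dtype=float)   # shape (K, C)
    w = counts / counts.sum(axis=0)                  # per-class client shares
    per_class = [sum(m * w[k, c] for k, m in enumerate(models))
                 for c in range(counts.shape[1])]
    return sum(per_class) / counts.shape[1]

# Toy non-IID example: 2 clients, 2 classes, scalar "models".
models = [np.array(1.0), np.array(3.0)]
class_counts = [[90, 10],    # client 0: mostly class 0
                [30, 90]]    # client 1: mostly class 1
print(fedavg(models, [100, 120]))                  # size-weighted average
print(class_weighted_aggregate(models, class_counts))
```

Under this skew the two rules differ: FedAvg gives client 1 more weight purely by dataset size, while the class-weighted rule boosts whichever client dominates each class, which is the intuition behind rebalancing representativeness under non-IID data.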




Updated: 2021-04-08