Secure and efficient parameters aggregation protocol for federated incremental learning and its applications
International Journal of Intelligent Systems (IF 7) Pub Date: 2021-11-08, DOI: 10.1002/int.22727
Xiaoying Wang, Zhiwei Liang, Arthur Sandor Voundi Koe, Qingwu Wu, Xiaodong Zhang, Haitao Li, Qintai Yang

Federated Learning (FL) enables the deployment of distributed machine learning models across the cloud and Edge Devices (EDs) while preserving the privacy of sensitive local data, such as electronic health records. However, despite FL's advantages in security and flexibility, current constructions still suffer from several limitations: heavy computation overhead on resource-constrained EDs, communication overhead when uploading converged local model parameters to a centralized server for aggregation, and no guarantee that previously acquired knowledge is preserved when learning incrementally over new local data sets. This paper introduces a secure and resource-friendly protocol for parameter aggregation in federated incremental learning, together with its applications. In this study, the central server relies on a new aggregation method called orthogonal gradient aggregation. The method assumes that each local data set changes continually and updates parameters in the direction orthogonal to the previous parameter spaces. As a result, our new construction is robust against catastrophic forgetting, maintains the accuracy of the federated neural network, and is efficient in both computation and communication overhead. Moreover, extensive experimental analysis over several significant incremental-learning data sets demonstrates the efficiency, efficacy, and flexibility of our new protocol.
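The orthogonal update idea described above can be sketched as follows. This is a minimal illustration only, assuming the server-side aggregation resembles orthogonal gradient projection (projecting the averaged client update onto the subspace orthogonal to directions retained from earlier tasks); the helper names `project_orthogonal`, `aggregate`, and `extend_basis` are hypothetical and not taken from the paper.

```python
import numpy as np

def project_orthogonal(grad, basis):
    """Remove the components of `grad` lying in the span of `basis`.
    The basis vectors are assumed to be orthonormal."""
    g = np.asarray(grad, dtype=float).copy()
    for b in basis:
        g -= (g @ b) * b  # subtract the projection onto each basis vector
    return g

def aggregate(client_grads, basis):
    """Average the client updates, then project the average onto the
    subspace orthogonal to directions learned on previous data sets."""
    avg = np.mean(client_grads, axis=0)
    return project_orthogonal(avg, basis)

def extend_basis(basis, grad, tol=1e-8):
    """After a task converges, keep the normalized orthogonal residue of
    its update so future aggregations avoid overwriting it."""
    residue = project_orthogonal(grad, basis)
    norm = np.linalg.norm(residue)
    if norm > tol:  # skip directions already covered by the basis
        basis.append(residue / norm)
    return basis
```

Because every aggregated update is orthogonal to the retained directions, parameters that encode earlier tasks are (to first order) left unchanged, which is the intuition behind the robustness to catastrophic forgetting claimed above.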
