Anonymous and Privacy-Preserving Federated Learning With Industrial Big Data
IEEE Transactions on Industrial Informatics (IF 12.3). Pub Date: 2021-01-15. DOI: 10.1109/tii.2021.3052183
Bin Zhao, Kai Fan, Kan Yang, Zilong Wang, Hui Li, Yintang Yang

Many artificial intelligence techniques have been applied to extract useful information from massive industrial big data. However, privacy issues are often overlooked in existing methods. In this article, we propose an anonymous and privacy-preserving federated learning scheme for mining industrial big data. Through experiments, we explored the effect of the proportion of shared parameters on accuracy and found that sharing only a portion of the parameters achieves nearly the same accuracy as sharing all of them. On this basis, our federated learning scheme reduces privacy leakage by sharing fewer parameters between the server and each participant. Specifically, we apply differential privacy to the shared parameters via the Gaussian mechanism to provide strict privacy preservation; we test the effect of different $\varepsilon$ and $\delta$ values on accuracy; and we track $\delta$, stopping training once it reaches a preset threshold. Moreover, we employ a proxy server as a middle layer between the server and all participants to achieve participant anonymity; notably, this also reduces the communication burden on the federated learning server. Finally, we provide a security analysis and performance evaluations through comparison with other schemes.
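The core idea described above, sharing only a fraction of the model parameters and perturbing them with Gaussian noise before upload, can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact scheme: the function name, the top-k magnitude selection rule, and all parameter defaults are assumptions, and the noise scale uses the standard Gaussian-mechanism calibration $\sigma = \Delta_2 \sqrt{2\ln(1.25/\delta)}/\varepsilon$ for an L2 sensitivity bounded by clipping.

```python
import numpy as np

def share_with_dp(grads, share_frac=0.1, clip=1.0,
                  epsilon=1.0, delta=1e-5, rng=None):
    """Hypothetical sketch of partial-parameter sharing with the
    Gaussian mechanism: select the largest-magnitude fraction of
    gradients, clip their L2 norm, and add calibrated noise."""
    rng = rng if rng is not None else np.random.default_rng()
    flat = np.asarray(grads, dtype=float).ravel()

    # Share only a fraction of the parameters (largest magnitudes).
    k = max(1, int(share_frac * flat.size))
    idx = np.argsort(np.abs(flat))[-k:]
    selected = flat[idx]

    # Clip to bound the L2 sensitivity of the shared vector at `clip`.
    norm = np.linalg.norm(selected)
    if norm > clip:
        selected = selected * (clip / norm)

    # Gaussian-mechanism noise scale for (epsilon, delta)-DP.
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noisy = selected + rng.normal(0.0, sigma, size=selected.shape)

    # The participant uploads only (idx, noisy) to the proxy/server.
    return idx, noisy
```

A participant would call this once per round on its local gradient vector; the server (reached via the proxy layer for anonymity) aggregates the noisy values at the reported indices, so the per-round communication cost shrinks with `share_frac`.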

Updated: 2021-01-15