Privacy-enhancing machine learning framework with private aggregation of teacher ensembles
International Journal of Intelligent Systems (IF 5.0), Pub Date: 2022-09-02, DOI: 10.1002/int.23020
Shengnan Zhao, Qi Zhao, Chuan Zhao, Han Jiang, Qiuliang Xu

Private aggregation of teacher ensembles (PATE), a general machine learning framework based on knowledge distillation, can provide a privacy guarantee for training data sets. However, the framework still poses several security risks. First, PATE focuses mainly on the privacy of the teachers' training data and fails to protect the privacy of the students' data. Second, PATE relies heavily on a trusted aggregator to count the teachers' votes, and it is not convincing to assume that a third party will never leak those votes during the knowledge transfer process. To address these issues, we improve the original PATE framework and present a new one that combines secret sharing with Intel Software Guard Extensions in a novel way. In the proposed framework, teacher models are trained locally and then uploaded to and stored on two computing servers in the form of secret shares. In the knowledge transfer phase, the two computing servers receive shares of the students' private inputs before collaboratively performing secure predictions, so neither the teachers nor the students expose sensitive information. During aggregation, we propose an effective masking technique suited to this setting that keeps the prediction results private and prevents the votes from being leaked to the aggregation server. In addition, we optimize the aggregation mechanism and add noise perturbations adaptively based on the posterior entropy of the prediction results. Finally, we evaluate the new framework on multiple data sets and experimentally demonstrate that it allows highly efficient, accurate, and secure predictions.
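To make the aggregation pipeline in the abstract concrete, the sketch below illustrates (in plaintext, outside any SGX enclave) two of the ideas it describes: splitting a teachers' vote histogram into additive secret shares for the two computing servers, and adding Laplace noise whose scale adapts to the posterior entropy of the vote distribution. This is only a minimal illustration under assumed parameters; the ring size, the `share`/`noisy_argmax` helpers, and the specific noise-scaling rule are illustrative choices, not taken from the paper.

```python
import numpy as np

RING = 2**32  # assumed ring size for additive secret sharing (illustrative)

def share(x, rng):
    """Split an integer vector x into two additive shares with x = (s0 + s1) mod RING."""
    s0 = rng.integers(0, RING, size=x.shape, dtype=np.int64)
    s1 = (x - s0) % RING
    return s0, s1  # one share per computing server

def reconstruct(s0, s1):
    """Recombine the two servers' shares; neither share alone reveals x."""
    return (s0 + s1) % RING

def posterior_entropy(counts):
    """Entropy (in nats) of the normalized vote histogram."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def noisy_argmax(counts, base_scale=2.0, rng=None):
    """Return the winning label after adding entropy-adaptive Laplace noise.

    Stand-in for the adaptive mechanism in the abstract: low-entropy (confident)
    vote histograms receive less noise than ambiguous ones.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = base_scale * (1.0 + posterior_entropy(counts))
    return int(np.argmax(counts + rng.laplace(0.0, scale, size=counts.shape)))

# Toy example: 10 teachers vote over 3 classes.
rng = np.random.default_rng(0)
votes = np.array([4, 5, 1])          # plaintext histogram, for illustration only
s0, s1 = share(votes, rng)           # what each computing server would actually hold
assert np.array_equal(reconstruct(s0, s1), votes)
print("predicted label:", noisy_argmax(votes, rng=rng))
```

In the framework described in the abstract, the vote histogram would not be reconstructed in the clear as it is here; the servers operate on masked shares and only the final noisy prediction is released.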

Updated: 2022-09-02