Secure multiparty learning from the aggregation of locally trained models
Journal of Network and Computer Applications (IF 7.7), Pub Date: 2020-06-22, DOI: 10.1016/j.jnca.2020.102754
Xu Ma, Cunmei Ji, Xiaoyu Zhang, Jianfeng Wang, Jin Li, Kuan-Ching Li, Xiaofeng Chen

In many applications, multiple parties would benefit from a precise learning model trained on their aggregated datasets. However, the trivial approach of collecting all the data in a single datacenter and processing it centrally is inappropriate when data privacy is a significant concern. In this paper, we propose a new framework for secure multi-party learning and construct a concrete scheme by incorporating aggregate signature and proxy re-encryption techniques. Unlike previous solutions for multi-party privacy-preserving machine learning, we do not encrypt the whole dataset or the intermediate values produced during training. In our scheme, secure verifiable computation delegation is used to privately label a public dataset from the aggregation of locally trained models. Using these newly labeled data items, the participants can update their learning models with a substantial improvement in accuracy. Further, we prove that the proposed scheme satisfies the desired security properties, and experimental analysis on MNIST and HAM10000 shows that it is highly efficient.
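The workflow described above can be sketched at a high level: each party trains a model on its own private data, the parties' predictions on a public unlabeled dataset are aggregated into labels, and each party then retrains on its private data plus the newly labeled public items. The Python sketch below illustrates only this plaintext workflow under assumed details (a toy nearest-centroid classifier, majority-vote aggregation, synthetic Gaussian data); the paper's cryptographic machinery (aggregate signatures, proxy re-encryption, verifiable computation delegation) is deliberately omitted, and all names are hypothetical.

```python
# Hedged sketch of the "label a public dataset from aggregated local models,
# then retrain" workflow. This is NOT the paper's construction: the security
# layer is omitted and the classifier/aggregation rule are illustrative choices.
import numpy as np

class NearestCentroid:
    """Minimal classifier standing in for each party's locally trained model."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def aggregate_labels(models, X_public):
    """Majority vote over the parties' predictions (one plausible aggregation rule)."""
    votes = np.stack([m.predict(X_public) for m in models])  # shape: (n_parties, n_public)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

rng = np.random.default_rng(0)

def make_private_data(n=40):
    # Each party holds a small private dataset: two Gaussian blobs, labels 0/1.
    X0 = rng.normal(loc=-1.0, size=(n // 2, 2))
    X1 = rng.normal(loc=+1.0, size=(n // 2, 2))
    return np.vstack([X0, X1]), np.array([0] * (n // 2) + [1] * (n // 2))

parties = [make_private_data() for _ in range(3)]

# Step 1: each party trains locally on its own private data.
local_models = [NearestCentroid().fit(X, y) for X, y in parties]

# Step 2: a public unlabeled dataset is labeled by aggregating the local models' outputs.
X_public = rng.normal(size=(200, 2)) * 1.5
y_public = aggregate_labels(local_models, X_public)

# Step 3: each party updates its model on private data plus the newly labeled public data.
updated_models = [
    NearestCentroid().fit(np.vstack([X, X_public]), np.concatenate([y, y_public]))
    for X, y in parties
]
print("public items labeled:", len(y_public))
```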



Updated: 2020-06-22