Privacy-Preserving Machine Learning Training in Aggregation Scenarios
arXiv - CS - Cryptography and Security Pub Date : 2020-09-21 , DOI: arxiv-2009.09691
Liehuang Zhu, Xiangyun Tang, Meng Shen, Jie Zhang, Xiaojiang Du

In the development of Smart Cities, the growing popularity of Machine Learning (ML), which depends on high-quality training datasets generated by diverse IoT devices, raises natural questions about the privacy guarantees that can be provided in such settings. Privacy-preserving ML training in an aggregation scenario enables a model demander to securely train ML models on sensitive data gathered from personal IoT devices. Existing solutions are generally server-aided, cannot withstand collusion between the servers or between the servers and data owners, and are ill-suited to the constrained environments of IoT. We propose a privacy-preserving ML training framework named Heda, consisting of a library of building blocks based on partially homomorphic encryption (PHE) that enables the construction of multiple privacy-preserving ML training protocols for the aggregation scenario, without the assistance of untrusted servers and while remaining secure under collusion. Rigorous security analysis demonstrates that the proposed protocols protect the privacy of each participant in the honest-but-curious model and remain secure under most collusion situations. Extensive experiments validate the efficiency of Heda, which achieves privacy-preserving ML training without loss of model accuracy.
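The abstract does not detail Heda's building blocks, but the core primitive it names, partially homomorphic encryption, can be illustrated with a minimal additively homomorphic scheme (Paillier). The sketch below is purely illustrative and is not the paper's protocol: it shows how a model demander could sum encrypted values contributed by data owners without decrypting any individual contribution. The toy 8-bit-scale primes are an assumption for readability; real deployments use moduli of 2048 bits or more.

```python
import math
import random

def keygen(p=211, q=223):
    # Toy primes for illustration only; real keys need >= 2048-bit moduli.
    n = p * q
    lam = math.lcm(p - 1, q - 1)      # Carmichael function lambda(n)
    g = n + 1                         # standard choice of generator
    mu = pow(lam, -1, n)              # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    # c = g^m * r^n mod n^2, with random r coprime to n
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    # m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n
    lam, mu, n = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

def add_cipher(pub, c1, c2):
    # Additive homomorphism: Dec(c1 * c2 mod n^2) = m1 + m2 mod n
    n2 = pub[0] ** 2
    return (c1 * c2) % n2

# Example: two data owners encrypt local values; the aggregator sums
# ciphertexts without ever seeing the plaintexts.
pub, priv = keygen()
c_sum = add_cipher(pub, encrypt(pub, 12), encrypt(pub, 30))
assert decrypt(priv, c_sum) == 42
```

In a server-free aggregation setting like the one the abstract describes, the key question is who holds the decryption key; Heda's specific protocols for distributing trust among participants are in the paper itself, not reproduced here.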

Updated: 2020-09-22