A Defense Framework for Privacy Risks in Remote Machine Learning Service
Security and Communication Networks (IF 1.968) | Pub Date: 2021-06-18 | DOI: 10.1155/2021/9924684
Yang Bai, Yu Li, Mingchuang Xie, Mingyu Fan

In recent years, machine learning approaches have been widely adopted for many applications, including classification. Machine learning models often handle collections of sensitive data and are typically trained on a remote public cloud server, for instance in a machine learning as a service (MLaaS) system. In this setting, users upload their local data and use the server's computation capability to train models, or they directly query models already trained by the MLaaS provider. Unfortunately, recent works reveal that both the curious server (which trains the model with users' sensitive local data and tries to learn information about individuals) and the malicious MLaaS user (who abuses query access to the MLaaS system) create privacy risks. Adversarial methods, a typical mitigation, have been studied in several recent works. However, most of them focus on privacy preservation against the malicious user; in other words, they commonly treat the data owner and the model provider as a single role. Under this assumption, the privacy leakage risks posed by the curious server are neglected. Differential privacy methods can defend against privacy threats from both the curious server and the malicious MLaaS user by adding noise directly to the training data; nonetheless, differential privacy substantially decreases the classification accuracy of the target model. In this work, we propose a generic privacy-preserving framework based on the adversarial method that defends against both the curious server and the malicious MLaaS user. The framework can be instantiated with several adversarial algorithms to generate adversarial examples directly from data owners' original data, thereby hiding the sensitive information in the original data. We then explore the constraint conditions of this framework, which help us find the balance between privacy protection and model utility. The experimental results show that our defense framework with the AdvGAN method is effective against membership inference attacks (MIA), and that our defense framework with the FGSM method can protect sensitive data from direct content exposure attacks. In addition, our method achieves a better privacy-utility balance than the existing method.
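The FGSM variant the abstract refers to (Goodfellow et al.) perturbs each sample one step along the sign of the loss gradient. Below is a minimal, illustrative sketch of that perturbation in PyTorch; the function name fgsm_perturb, the epsilon value, and the [0, 1] clamping range are assumptions for illustration, not the paper's implementation.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # One-step FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y)).
    # Illustrative sketch only. In the paper's setting, the data owner would
    # apply a perturbation like this to original samples before upload, so the
    # raw content is hidden while the perturbed data can still train a model.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in the valid range

A larger epsilon hides more of the original content but degrades the utility of the trained model; finding that trade-off is what the framework's constraint conditions address.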

Updated: 2021-06-18