Oriole: Thwarting Privacy against Trustworthy Deep Learning Models
arXiv - CS - Artificial Intelligence. Pub Date: 2021-02-23, DOI: arxiv-2102.11502
Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, Haifeng Qian

Deep neural networks have achieved unprecedented success in face recognition, to the point that any individual can crawl others' images from the Internet without explicit permission and train a high-precision face recognition model, a serious violation of privacy. Recently, a well-known system named Fawkes (published at USENIX Security 2020) claimed that this privacy threat can be neutralized by uploading cloaked user images instead of the originals. In this paper, we present Oriole, a system that combines the advantages of data poisoning attacks and evasion attacks to defeat the protection offered by Fawkes, by training the attacker's face recognition model on multi-cloaked images generated by Oriole. As a result, the face recognition accuracy of the attack model is maintained and the weaknesses of Fawkes are exposed. Experimental results show that the proposed Oriole system effectively interferes with the performance of the Fawkes system and achieves promising attack results. Our ablation study highlights several principal factors that affect the performance of Oriole, including the DSSIM perturbation budget, the ratio of leaked clean user images, and the number of multi-cloaks generated for each uncloaked image. We also identify and discuss at length the vulnerabilities of Fawkes. We hope that the methodology presented in this paper will alert the security community to the need for more robust privacy-preserving deep learning models.
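The DSSIM perturbation budget named in the ablation study is the perceptual-distance cap Fawkes places on the difference between a clean image and its cloaked version, commonly defined as DSSIM = (1 - SSIM) / 2. The sketch below is not the authors' implementation; it only illustrates, using scikit-image's SSIM, how a cloak might be checked against such a budget. The function names and the default budget value are illustrative assumptions.

import numpy as np
from skimage.metrics import structural_similarity

def dssim(clean: np.ndarray, cloaked: np.ndarray) -> float:
    # Structural dissimilarity between two uint8 RGB images:
    # DSSIM = (1 - SSIM) / 2, so 0.0 means the images are identical.
    ssim = structural_similarity(clean, cloaked, channel_axis=-1, data_range=255)
    return (1.0 - ssim) / 2.0

def within_budget(clean: np.ndarray, cloaked: np.ndarray, budget: float = 0.007) -> bool:
    # Illustrative check; the 0.007 default is an assumed budget value,
    # chosen to match the scale of DSSIM budgets discussed for Fawkes.
    return dssim(clean, cloaked) <= budget

A smaller budget makes cloaks less perceptible to humans but also weaker as perturbations, which is why the ablation study varies this budget to measure its effect on Oriole's attack success.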

Updated: 2021-02-24