Adversarial attacks by attaching noise markers on the face against deep face recognition
Journal of Information Security and Applications (IF 5.6) Pub Date: 2021-05-21, DOI: 10.1016/j.jisa.2021.102874
Gwonsang Ryu , Hosung Park , Daeseon Choi

Deep neural networks (DNNs) have become increasingly effective at difficult machine learning tasks such as image classification, speech recognition, and natural language processing. Face recognition (FR) using DNNs achieves high performance and is widely used in domains such as payment systems and immigration inspection. However, DNNs are vulnerable to adversarial examples, generated by adding a small amount of noise to an original sample, that cause the DNNs to misclassify. In this study, we attempt to deceive state-of-the-art FR by attaching noise markers to a face in the real world. To do so, we address several challenges in the attack process: selecting the locations of the noise markers; the color shift between digital noise markers and their printed counterparts; the color shift between markers attached to the face and the same markers as captured in a photograph; and the positional differences between digital markers and markers physically attached to the face. In experiments, we generate noise markers that account for these challenges and show that state-of-the-art FR can be deceived by attaching at most 10 noise markers to a face. This poses a security risk for FR models based on DNNs.
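The attack described above can be viewed as an optimization over the colors of a small set of markers placed at fixed facial locations. The sketch below is a minimal conceptual illustration, assuming a PyTorch face-embedding model; the function names, the fixed-mask setup, and the dodging loss are our assumptions for illustration, not the authors' implementation, and the printing and camera color shifts discussed in the abstract would require an additional color-mapping step on top of this.

```python
# Minimal conceptual sketch of a noise-marker attack on a face-embedding
# model (dodging: push the adversarial embedding away from the true
# identity). Assumes `embed` is a PyTorch model mapping a face image to
# an L2-normalized embedding, and that marker locations (masks) are
# chosen in advance. Names are illustrative, not the paper's code.
import torch

def apply_markers(image, colors, masks):
    # image: (3, H, W); colors: (K, 3) learnable RGB values in [0, 1]
    # masks: (K, 1, H, W) binary masks, one per marker location
    out = image
    for k in range(masks.shape[0]):
        out = out * (1 - masks[k]) + colors[k].view(3, 1, 1) * masks[k]
    return out

def attack(embed, image, masks, true_emb, steps=300, lr=0.05):
    colors = torch.rand(masks.shape[0], 3, requires_grad=True)
    opt = torch.optim.Adam([colors], lr=lr)
    for _ in range(steps):
        adv = apply_markers(image, colors.clamp(0, 1), masks)
        emb = embed(adv.unsqueeze(0)).squeeze(0)
        # Minimizing cosine similarity to the true identity's embedding
        # drives misrecognition (dodging).
        loss = torch.nn.functional.cosine_similarity(emb, true_emb, dim=0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return apply_markers(image, colors.detach().clamp(0, 1), masks)
```

For an impersonation variant the loss flips sign: maximize similarity to a chosen target identity's embedding instead of minimizing similarity to the true one.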




Updated: 2021-05-22