Backdoors hidden in facial features: a novel invisible backdoor attack against face recognition systems
Peer-to-Peer Networking and Applications (IF 3.3) Pub Date: 2021-01-08, DOI: 10.1007/s12083-020-01031-z
Mingfu Xue, Can He, Jian Wang, Weiqiang Liu

Deep neural network (DNN) based face recognition systems have become one of the most popular modalities for user identity authentication. However, recent studies have indicated that malicious attackers can inject specific backdoors into the DNN model of a face recognition system, which is known as a backdoor attack. As a result, the attacker can trigger the backdoors and impersonate someone else to log into the system, without affecting the normal usage of legitimate users. Existing studies use accessories (such as purple sunglasses or a bandanna) as the triggers of their backdoor attacks; these triggers are visually conspicuous and easily perceived by humans, which results in the failure of the backdoor attacks. In this paper, for the first time, we exploit facial features as the carriers to embed backdoors, and propose a novel backdoor attack method named BHF2 (Backdoor Hidden in Facial Features). BHF2 constructs masks with the shapes of facial features (eyebrows and beard), and then injects the backdoors into the masks to ensure visual stealthiness. Further, to make the backdoors look more natural, we propose the BHF2N (Backdoor Hidden in Facial Features Naturally) method, which exploits an artificial intelligence (AI) based tool to auto-embed natural backdoors. The generated backdoors are visually stealthy, which guarantees the concealment of the backdoor attacks. The proposed methods (BHF2 and BHF2N) can be applied in black-box attack scenarios, in which a malicious adversary has no knowledge of the target face recognition system. Moreover, the proposed attack methods are feasible in strict identity authentication scenarios where accessories are not permitted.
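The core mechanism described above, injecting a trigger only inside a mask shaped like a facial feature, can be illustrated with a minimal sketch. This toy example is not the paper's implementation: the mask shape, trigger values, and blending factor `alpha` are all hypothetical, and real inputs would be full-size face images rather than a tiny grayscale grid.

```python
# Hypothetical sketch of a mask-constrained trigger embedding, loosely in the
# spirit of BHF2: pixels are modified only where a binary mask (here, a fake
# "eyebrow" region) is set, leaving the rest of the face image untouched.

def embed_trigger(image, mask, trigger, alpha=0.5):
    """Blend `trigger` into `image` at positions where `mask` is 1.

    image, mask, trigger: equal-sized 2D lists of ints (0-255 grayscale).
    alpha: blending strength; higher makes the trigger more visible.
    """
    out = [row[:] for row in image]  # copy so the clean image is untouched
    for y in range(len(image)):
        for x in range(len(image[0])):
            if mask[y][x]:
                blended = (1 - alpha) * image[y][x] + alpha * trigger[y][x]
                out[y][x] = int(round(blended))
    return out

# Toy 4x4 "face" with a two-pixel "eyebrow" mask on the top row.
clean = [[100] * 4 for _ in range(4)]
mask = [[0, 1, 1, 0]] + [[0] * 4 for _ in range(3)]
trigger = [[200] * 4 for _ in range(4)]

poisoned = embed_trigger(clean, mask, trigger, alpha=0.5)
```

Only the two masked pixels change (here to 150, halfway between 100 and 200); every unmasked pixel keeps its original value, which is what keeps the perturbation localized and visually inconspicuous.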
Experimental results on two state-of-the-art face recognition models show that the maximum success rate of the proposed attack methods reaches 100% on the DeepID1 and VGGFace models, while the accuracy degradation of the target recognition models is as low as 0.01% (DeepID1) and 0.02% (VGGFace), respectively. Meanwhile, the generated backdoors achieve visual stealthiness: the pixel change rate of a backdoor instance relative to its clean face image is as low as 0.16%, and the structural and dHash similarity scores are as high as 98.82% and 98.19%, respectively.
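Two of the stealthiness metrics quoted above can be sketched in a few lines. This is an illustrative toy, not the paper's evaluation code: a real dHash first resizes the image (conventionally to 9x8 pixels) before comparing adjacent pixels, whereas this version hashes a small grid directly.

```python
# Toy versions of two stealthiness metrics: pixel change rate between a clean
# image and its backdoor instance, and a dHash (difference-hash) similarity
# score computed as 1 minus the normalized Hamming distance of the hashes.

def pixel_change_rate(a, b):
    """Fraction of pixels that differ between two equal-sized 2D grids."""
    total = len(a) * len(a[0])
    changed = sum(1 for ra, rb in zip(a, b)
                  for pa, pb in zip(ra, rb) if pa != pb)
    return changed / total

def dhash_bits(image):
    """Difference hash: one bit per horizontally adjacent pixel pair."""
    return [1 if row[x] > row[x + 1] else 0
            for row in image for x in range(len(row) - 1)]

def dhash_similarity(a, b):
    """1 - normalized Hamming distance between the two dHashes."""
    ha, hb = dhash_bits(a), dhash_bits(b)
    dist = sum(x != y for x, y in zip(ha, hb))
    return 1 - dist / len(ha)

clean = [[100, 120, 110, 90], [80, 85, 95, 100]]
poisoned = [[100, 120, 115, 90], [80, 85, 95, 100]]  # one pixel modified

rate = pixel_change_rate(clean, poisoned)  # 1 changed pixel out of 8
sim = dhash_similarity(clean, poisoned)
```

A small pixel perturbation can leave the dHash unchanged entirely (as here, since the modified pixel does not flip any adjacent-pixel comparison), which is why hash-based similarity is a useful proxy for visual stealthiness.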




Updated: 2021-01-08