Light Can Hack Your Face! Black-box Backdoor Attack on Face Recognition Systems
arXiv - CS - Artificial Intelligence. Pub Date: 2020-09-15, arXiv:2009.06996
Haoliang Li (1), Yufei Wang (1), Xiaofei Xie (1), Yang Liu (1), Shiqi Wang (2), Renjie Wan (1), Lap-Pui Chau (1), and Alex C. Kot (1) ((1) Nanyang Technological University, Singapore, (2) City University of Hong Kong)

Deep neural networks (DNNs) have shown great success in many computer vision applications. However, they are also known to be susceptible to backdoor attacks. Most existing backdoor attacks assume that the targeted DNN is always available and that an attacker can inject a specific pattern into the training data to fine-tune the model. In practice, however, such an attack may not be feasible, as the DNN model may be encrypted and accessible only within a secure enclave. In this paper, we propose a novel black-box backdoor attack on face recognition systems that can be conducted without any knowledge of the targeted DNN model. Specifically, we propose a backdoor attack with a novel color-stripe-pattern trigger, which can be generated by modulating an LED with a specialized waveform, and we use an evolutionary computing strategy to optimize the waveform for the attack. Our backdoor attack can be conducted under very mild conditions: 1) the adversary cannot manipulate the input in an unnatural way (e.g., by injecting adversarial noise); 2) the adversary cannot access the training database; and 3) the adversary has no knowledge of the victim's model or training set. We show that the backdoor trigger can be quite effective: the attack success rate reaches up to $88\%$ in our simulation study and up to $40\%$ in our physical-domain study, for face recognition and verification tasks allowing at most three attempts during authentication. Finally, we evaluate several state-of-the-art defenses against backdoor attacks and find that our attack remains effective. Our study reveals a new physical backdoor attack and calls attention to the security of existing face recognition/verification techniques.
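The abstract names two technical ingredients: a color-stripe trigger produced by the interaction of a modulated LED with a rolling-shutter camera, and a gradient-free evolutionary search over the waveform parameters. The Python/NumPy sketch below illustrates both ideas under stated assumptions; it is not the authors' implementation, and every name here (`apply_stripes`, `attack_success_rate`, `evolve_waveform`, the sinusoidal waveform parameterization, and the black-box `query_model` callable) is hypothetical.

```python
import numpy as np

def apply_stripes(image, amplitude, frequency, phase):
    """Overlay horizontal color stripes on an H x W x 3 image in [0, 1],
    approximating the banding a modulated LED leaves on a rolling-shutter
    sensor (each scan line is exposed at a slightly different instant)."""
    h = image.shape[0]
    rows = np.arange(h)[:, None]                     # scan-line index
    # Per-channel phase offsets turn brightness flicker into *color* stripes.
    phases = phase + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    stripe = 1.0 + amplitude * np.sin(2 * np.pi * frequency * rows / h + phases)
    return np.clip(image * stripe[:, None, :], 0.0, 1.0)

def attack_success_rate(params, query_model, probe_faces, target_id):
    """Black-box fitness: fraction of stripe-triggered probes that the
    victim model, queried only through its prediction API, maps to the
    attacker's target identity."""
    amp, freq, phase = params
    hits = sum(query_model(apply_stripes(x, amp, freq, phase)) == target_id
               for x in probe_faces)
    return hits / len(probe_faces)

def evolve_waveform(query_model, probe_faces, target_id,
                    pop_size=20, generations=30, seed=0):
    """Simple (mu + lambda) evolutionary search over (amplitude, frequency,
    phase). No gradients are needed, so the victim DNN stays a black box."""
    rng = np.random.default_rng(seed)
    sigma = np.array([0.05, 2.0, 0.3])               # per-parameter mutation scale
    pop = np.column_stack([rng.uniform(0.0, 0.5, pop_size),        # amplitude
                           rng.uniform(1.0, 40.0, pop_size),       # stripes per frame
                           rng.uniform(0.0, 2 * np.pi, pop_size)]) # phase
    for _ in range(generations):
        fitness = [attack_success_rate(p, query_model, probe_faces, target_id)
                   for p in pop]
        parents = pop[np.argsort(fitness)[-(pop_size // 2):]]      # keep the best half
        children = parents + rng.normal(0.0, sigma, parents.shape) # Gaussian mutation
        pop = np.vstack([parents, children])
    fitness = [attack_success_rate(p, query_model, probe_faces, target_id)
               for p in pop]
    return pop[int(np.argmax(fitness))]  # waveform parameters to drive the LED with
```

A sinusoid is only one convenient parameterization of the LED waveform; the key point the sketch captures is that the search consults nothing but the victim model's query responses, which is what makes the attack black-box.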

Updated: 2020-09-16