Sparse adversarial attack based on ℓq-norm for fooling the face anti-spoofing neural networks
Journal of Electronic Imaging (IF 1.1) Pub Date: 2021-04-01, DOI: 10.1117/1.jei.30.2.023023
Linxi Yang, Jiezhi Yang, Mingjie Peng, Jiatian Pi, Zhiyou Wu, Xunyi Zhou, Jueyou Li

Neural networks are vulnerable to various adversarial perturbations added to the input. Highly sparse adversarial perturbations are difficult to detect, which makes them especially dangerous to network security. Previous research has shown that the ℓ0-norm yields good sparsity but is challenging to optimize. We use the ℓq-norm to approximate the ℓ0-norm and propose a new white-box algorithm that generates adversarial examples minimizing the ℓq distance from the original image. Meanwhile, we extend the adversarial attack to the face anti-spoofing task in the field of face recognition security. This extension enables us to generate sparse and unobservable facial attack perturbations. To increase the diversity of the data set, we construct a new data set of real and fake facial images containing images produced by various recent spoofing methods. Extensive experiments show that our proposed method can effectively generate sparse perturbations and successfully mislead the classifier in multi-classification tasks and face anti-spoofing tasks.
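The approximation underlying the method can be stated compactly. For 0 < q < 1, the ℓq quasi-norm penalizes small nonzero entries almost as heavily as large ones, and in the limit it simply counts nonzeros; the display below is a standard statement of this fact, not a formula taken from the paper itself.

$$\|\delta\|_q^q = \sum_i |\delta_i|^q, \quad 0 < q < 1, \qquad \lim_{q \to 0^+} \|\delta\|_q^q = \|\delta\|_0 = \#\{\, i : \delta_i \neq 0 \,\}.$$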
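As a rough illustration of how such a white-box attack can be set up (a minimal sketch, not the authors' exact algorithm: the function name lq_attack and all hyperparameters q, lam, steps, step_size, and eps_smooth are illustrative assumptions), one can run gradient descent on a misclassification loss plus a smoothed ℓq^q penalty:

```python
import torch
import torch.nn.functional as F

def lq_attack(model, x, label, q=0.5, lam=1e-2, steps=100,
              step_size=1e-2, eps_smooth=1e-8):
    """White-box sparse attack: encourage misclassification while
    penalizing the (smoothed) lq^q size of the perturbation."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=step_size)
    for _ in range(steps):
        logits = model(x + delta)
        # Untargeted attack: increase the loss of the true label...
        adv_loss = -F.cross_entropy(logits, label)
        # ...while keeping the perturbation sparse. The eps_smooth term
        # keeps the gradient of |d|^q finite at d = 0 for q < 1.
        lq_penalty = (delta.abs() + eps_smooth).pow(q).sum()
        loss = adv_loss + lam * lq_penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project back so the adversarial image stays a valid image.
        with torch.no_grad():
            delta.copy_((x + delta).clamp(0.0, 1.0) - x)
    return (x + delta).detach()

# Hypothetical usage: net is a classifier, images in [0, 1], labels are ints.
# x_adv = lq_attack(net, images, labels)
```

The smoothing constant is needed because |δi|^q has an infinite gradient at zero when q < 1; in practice the penalty drives many entries of the perturbation close to zero, after which they can be thresholded to exact sparsity.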

Updated: 2021-04-21