Detection of Face Recognition Adversarial Attacks
Computer Vision and Image Understanding (IF 4.3), Pub Date: 2020-09-07, DOI: 10.1016/j.cviu.2020.103103
Fabio Valerio Massoli , Fabio Carrara , Giuseppe Amato , Fabrizio Falchi

Deep Learning methods have become state-of-the-art for solving tasks such as Face Recognition (FR). Unfortunately, despite their success, it has been pointed out that these learning models are exposed to adversarial inputs (images to which noise imperceptible to humans is added in order to maliciously fool a neural network), thus limiting their adoption in sensitive real-world applications. While enormous effort has been devoted to training models that are robust against this type of threat, adversarial detection techniques have recently started to draw attention within the scientific community. The advantage of a detection approach is that it does not require re-training any model and can therefore be added to any system. In this context, we present our work on adversarial detection in forensics, focused mainly on detecting attacks against FR systems in which the learning model is typically used only as a feature extractor; training a more robust classifier might therefore not be enough to counteract the adversarial threats. The contribution of our work is four-fold: (i) we test our proposed adversarial detection approach against classification attacks, i.e., adversarial samples crafted to fool an FR neural network acting as a classifier; (ii) using a k-Nearest Neighbor (k-NN) algorithm as a guide, we generate deep features attacks against an FR system based on a neural network acting as a feature extractor, followed by a similarity-based procedure that returns the query identity; (iii) we use the deep features attacks to fool an FR system on the 1:1 face verification task, and we show that they are more effective than classification attacks at evading this type of system; (iv) we use the detectors trained on the classification attacks to detect the deep features attacks, thus showing that our approach generalizes to different classes of attacks.
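For illustration, below is a minimal PyTorch sketch of the two mechanisms described in contributions (ii) and (iii): a feature-space ("deep features") attack that perturbs a query image so that the extractor's output drifts toward a target identity's features, and a similarity-based 1:1 verification check. This is a sketch under stated assumptions, not the paper's exact procedure: the names (extractor, target_feat, tau), the cosine-similarity loss, and the PGD-style optimization with an L-infinity budget are illustrative choices, and the paper's k-NN guidance over gallery features is simplified here to a single precomputed target vector.

import torch
import torch.nn.functional as F

def deep_features_attack(extractor, x, target_feat, eps=8/255, alpha=1/255, steps=50):
    # PGD-style attack in feature space (illustrative): perturb x within an
    # L-inf ball of radius eps so that the extractor's deep features move
    # toward target_feat, the feature vector of the identity to impersonate.
    # The paper guides this with a k-NN search over gallery features; a single
    # target vector stands in for that here.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feat = extractor(x_adv)
        # Cosine distance between current and target features.
        loss = 1.0 - F.cosine_similarity(feat, target_feat).mean()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # step toward the target features
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the L-inf budget
            x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid image
    return x_adv.detach()

def verify(extractor, x1, x2, tau=0.5):
    # 1:1 face verification: declare "same identity" iff the cosine similarity
    # of the two deep feature vectors exceeds a threshold tau (tau is a
    # placeholder; real systems calibrate it on a validation set).
    return F.cosine_similarity(extractor(x1), extractor(x2)) > tau

Note that an attack crafted this way never touches a classification layer: it succeeds if verify fires for the wrong identity, which is why training a more robust classifier is not enough and a separate detection stage is attractive for systems that use the network only as a feature extractor.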



Updated: 2020-09-11