Cross-resolution face recognition adversarial attacks
Pattern Recognition Letters (IF 5.1) Pub Date: 2020-10-15, DOI: 10.1016/j.patrec.2020.10.008
Fabio Valerio Massoli, Fabrizio Falchi, Giuseppe Amato

Face Recognition is among the best examples of computer vision problems in which the supremacy of deep learning techniques over standard ones is undeniable. Unfortunately, such models have been shown to be vulnerable to adversarial examples: input images to which a human-imperceptible perturbation is added in order to make a learning model output a wrong prediction.
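
To make the notion concrete, below is a minimal sketch of crafting an adversarial example with the Fast Gradient Sign Method (FGSM, Goodfellow et al.); the paper itself evaluates several stronger attacks, and the `model`, input, and label here are toy placeholders rather than the face recognition networks under study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Return x perturbed by eps * sign(grad of the loss w.r.t. x)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # For small eps the change is imperceptible to a human, yet it is often
    # enough to flip the model's prediction.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy usage: a random linear "model" stands in for a real face classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # image with pixel values in [0, 1]
y = torch.tensor([3])          # ground-truth label
x_adv = fgsm(model, x, y, eps=8 / 255)
```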

Moreover, in applications such as biometric systems and forensics, cross-resolution scenarios arise easily and have a non-negligible impact on recognition performance and on an adversary’s success. Although such vulnerabilities set a harsh limit on the deployment of deep learning-based face recognition systems in real-world applications, a comprehensive analysis of their behavior when threatened in a cross-resolution setting is missing from the literature.

In this context, we position our study: we harness several of the strongest adversarial attacks against deep learning-based face recognition systems, considering the cross-resolution domain. To craft adversarial instances, we exploit attacks based on three different metrics, i.e., L1, L2, and L∞, and we study the resilience of the models across resolutions. We then evaluate the performance of the systems under the face identification protocol, in both its open- and closed-set variants.
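
As an illustration of how the three metrics constrain an attack, the sketch below projects a perturbation back into an eps-ball under each norm, as a typical iterative attack (e.g., PGD) would do between steps. This is an assumed generic scheme, not the authors' implementation, and the L1 case uses a simple rescaling rather than an exact Euclidean projection.

```python
import torch

def project(delta, eps, norm):
    """Project a batch of perturbations `delta` into the eps-ball of a metric."""
    if norm == "linf":
        return delta.clamp(-eps, eps)
    flat = delta.flatten(1)
    p = 1 if norm == "l1" else 2
    norms = flat.norm(p=p, dim=1, keepdim=True).clamp(min=1e-12)
    factor = (eps / norms).clamp(max=1.0)  # shrink only points outside the ball
    return (flat * factor).view_as(delta)

delta = torch.randn(4, 3, 32, 32)
for norm in ("l1", "l2", "linf"):
    bounded = project(delta, eps=1.0, norm=norm)
```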

In our study, we find that deep representation attacks represent a far more dangerous threat to a face recognition system than attacks based on the classification output, regardless of the metric used. Furthermore, we notice that the resolution of the input image has a non-negligible impact on an adversary’s success in deceiving a learning model. Finally, by comparing the performance of the threatened networks under analysis, we show how they can benefit, in terms of resilience to adversarial attacks, from a cross-resolution training approach.
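
The distinction the study draws can be sketched as two attack objectives: one on the classification output and one on the deep representation (embedding). The `backbone`, `head`, and target embedding below are hypothetical stand-ins, and the single signed-gradient step is only illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: `backbone` maps an image to a deep embedding, `head` is a
# classifier on top of it. Neither is the authors' actual architecture.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
head = nn.Linear(128, 10)

x = torch.rand(1, 3, 32, 32, requires_grad=True)
y = torch.tensor([3])              # true identity label
feat_target = torch.randn(1, 128)  # embedding of some chosen target identity

# Classification-output attack: maximize the loss of the final prediction.
cls_loss = F.cross_entropy(head(backbone(x)), y)

# Deep representation attack: pull the image's embedding toward the target
# identity's embedding, ignoring whatever classifier sits on top.
rep_loss = F.mse_loss(backbone(x), feat_target)

# One illustrative signed-gradient step per objective.
g_cls, = torch.autograd.grad(cls_loss, x)
g_rep, = torch.autograd.grad(rep_loss, x)
x_cls_adv = (x + 0.01 * g_cls.sign()).clamp(0, 1)
x_rep_adv = (x - 0.01 * g_rep.sign()).clamp(0, 1)
```

Pulling the embedding toward a target identity can deceive any matcher built on those features, which is consistent with the abstract's finding that representation attacks are the more dangerous threat.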




Updated: 2020-10-29