Accurate and robust neural networks for face morphing attack detection
Journal of Information Security and Applications (IF 3.8), Pub Date: 2020-05-14, DOI: 10.1016/j.jisa.2020.102526
Clemens Seibold , Wojciech Samek , Anna Hilsmann , Peter Eisert

Artificial neural networks tend to use only what they need for a task. For example, to recognize a rooster, a network might consider only the rooster's red comb and wattle and ignore the rest of the animal. This makes networks vulnerable to attacks on their decision-making process and can reduce their generality. Thus, this phenomenon has to be considered during the training of networks, especially in safety- and security-related applications. In this paper, we propose neural network training schemes, based on different alterations of the training data, that increase robustness and generality. Specifically, we limit the amount and position of the information available to the neural network for its decision making, and study the effects on accuracy, generality, and robustness against semantic and black-box attacks for the particular example of face morphing attacks. In addition, we exploit layer-wise relevance propagation (LRP) to analyze the differences in the decision-making processes of the differently trained neural networks. A face morphing attack is an attack on a biometric facial recognition system in which the system is fooled into matching two different individuals to the same synthetic face image. Such a synthetic image can be created by aligning and blending images of the two individuals that should be matched with it. We train neural networks for face morphing attack detection using our proposed training schemes and show that they improve robustness against attacks on neural networks. Using LRP, we show that the improved training forces the networks to develop and use reliable models for all regions of the analyzed image. This redundancy in representation is of crucial importance for security-related applications.
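The abstract does not detail the exact training-data alterations; as a minimal illustrative sketch, one common way to "limit the amount and position of information" available to a network is to occlude a randomly placed region of each training image, so the classifier cannot rely on a single facial region. The function name `occlude_random_region` and the area fraction are hypothetical choices, not the authors' method.

```python
import numpy as np

def occlude_random_region(image, mask_fraction=0.25, rng=None):
    """Zero out a randomly placed rectangle covering roughly
    `mask_fraction` of the image area. Training on such altered
    data pushes the network to use evidence from all regions."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # Side lengths chosen so the rectangle's area is about mask_fraction.
    mh = max(1, int(h * np.sqrt(mask_fraction)))
    mw = max(1, int(w * np.sqrt(mask_fraction)))
    top = int(rng.integers(0, h - mh + 1))
    left = int(rng.integers(0, w - mw + 1))
    out = image.copy()
    out[top:top + mh, left:left + mw] = 0
    return out

# Example: alter a dummy 224x224 RGB "face" image before training.
img = np.full((224, 224, 3), 128, dtype=np.uint8)
aug = occlude_random_region(img, mask_fraction=0.25,
                            rng=np.random.default_rng(0))
```

Applied with fresh random positions each epoch, this kind of alteration acts as a regularizer similar in spirit to cutout-style augmentation.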




Updated: 2020-05-14