Transformer Based Defense GAN Against Palm-Vein Adversarial Attacks
IEEE Transactions on Information Forensics and Security (IF 6.3) | Pub Date: 2-9-2023 | DOI: 10.1109/tifs.2023.3243782
Yantao Li, Song Ruan, Huafeng Qin, Shaojiang Deng, Mounim A. El-Yacoubi
Vein biometrics is a high-security, privacy-preserving identification technology that has attracted increasing attention over the last decade. Deep neural networks (DNNs), such as convolutional neural networks (CNNs), have shown strong capabilities for robust feature representation and have achieved, as a result, state-of-the-art performance on various vision tasks. Inspired by this success, deep learning models have been widely investigated for vein recognition and have shown significant improvements in identification accuracy over handcrafted models. Existing deep learning models, however, are vulnerable to adversarial perturbation attacks, where carefully crafted small perturbations cause legitimate images to be misclassified, thereby degrading the reliability of vein recognition systems. To address this problem, we propose VeinGuard, a novel framework for defending deep learning classifiers against adversarial palm-vein image attacks. VeinGuard comprises two components: a local transformer-based GAN (LTGAN) that learns the distribution of unperturbed vein images and generates high-quality palm-vein images, and a purifier, consisting of a trainable residual network and a pre-trained generator from LTGAN, that automatically removes a wide variety of adversarial perturbations. The resulting clean images are fed to vein classifiers for identification, thereby neutralizing the attacks. We evaluate VeinGuard on three public vein datasets under white-box and black-box attacks, with ablation experiments and computation-time measurements. The experimental results show that VeinGuard filters out the perturbations and enables the classifiers to achieve state-of-the-art recognition results under different adversarial attacks.
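The abstract describes the purification pipeline only at a high level: a trainable network strips adversarial perturbations, a pre-trained LTGAN generator re-synthesizes the image from the learned distribution of clean vein images, and the result is handed to the vein classifier. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the class names (ResidualEncoder, ToyGenerator, purify), the layer sizes, the latent dimension, and the 64x64 image resolution are all illustrative assumptions, and the stand-in generator below is an ordinary deconvolutional network rather than the paper's transformer-based LTGAN generator.

```python
import torch
import torch.nn as nn


class ResidualEncoder(nn.Module):
    """Trainable purifier front-end (assumed here to be a small residual CNN)
    mapping a possibly perturbed 64x64 vein image to a latent code."""
    def __init__(self, channels: int = 1, latent_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),        # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),       # 16 -> 8
        )
        self.skip = nn.Conv2d(channels, 128, 8, stride=8)  # residual shortcut, 64 -> 8
        self.to_latent = nn.Linear(128 * 8 * 8, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x) + self.skip(x)  # residual combination
        return self.to_latent(h.flatten(1))


class ToyGenerator(nn.Module):
    """Stand-in for the pre-trained LTGAN generator (latent code -> 64x64 image).
    In the paper this role is played by the transformer-based generator
    trained on the distribution of unperturbed palm-vein images."""
    def __init__(self, channels: int = 1, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(inplace=True),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),   # 16 -> 32
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1), nn.Tanh(),         # 32 -> 64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)


def purify(x_adv: torch.Tensor, encoder: nn.Module, generator: nn.Module) -> torch.Tensor:
    """Project a (possibly adversarial) image back toward the clean-vein manifold:
    the trainable encoder removes the perturbation, the frozen generator
    re-synthesizes the image, and the output goes on to the vein classifier."""
    with torch.no_grad():  # inference-time purification
        return generator(encoder(x_adv))


if __name__ == "__main__":
    encoder, generator = ResidualEncoder(), ToyGenerator()
    x_adv = torch.rand(4, 1, 64, 64)            # batch of perturbed palm-vein images
    x_clean = purify(x_adv, encoder, generator)
    print(x_clean.shape)                        # torch.Size([4, 1, 64, 64])
```

Because purification happens before classification, the downstream vein classifier needs no retraining; this is what lets the defense cover both white-box and black-box perturbations, as reported in the evaluation.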

Updated: 2024-08-26