VD-GAN: A Unified Framework for Joint Prototype and Representation Learning From Contaminated Single Sample per Person
IEEE Transactions on Information Forensics and Security ( IF 6.3 ) Pub Date : 1-9-2021 , DOI: 10.1109/tifs.2021.3050055
Meng Pang , Binghui Wang , Yiu-ming Cheung , Yiran Chen , Bihan Wen

Single sample per person (SSPP) face recognition with a contaminated biometric enrolment database (SSPP-ce FR) is an emerging practical FR problem, where the single sample per person in the enrolment database is no longer a standard sample but is contaminated by nuisance facial variations such as expression, lighting, pose, and disguise. In this case, conventional SSPP FR methods, including patch-based and generic learning methods, suffer serious performance degradation. A few recent methods have been proposed to tackle SSPP-ce FR, either by performing prototype learning on the contaminated enrolment database or by learning discriminative representations that are robust against variations. However, most of these approaches can handle only a single specified variation, e.g., pose, and cannot be extended to multiple variations. To address these two limitations, we propose a novel Variation Disentangling Generative Adversarial Network (VD-GAN) that jointly performs prototype learning and representation learning in a unified framework. The proposed VD-GAN consists of an encoder-decoder structural generator and a multi-task discriminator, and handles universal variations, including single, multiple, and even mixed variations in practice. The generator and discriminator play an adversarial game in which the generator learns a discriminative identity representation and generates an identity-preserved prototype for each face image, while the discriminator aims to predict the face identity label, distinguish real from fake prototypes, and disentangle the target variations from the learned representations. Qualitative and quantitative evaluations on various real-world face datasets containing single, multiple, and mixed variations demonstrate the effectiveness of VD-GAN.
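As a rough illustration of the multi-task discriminator objective described in the abstract (identity classification, real-vs-fake prototype discrimination, and variation prediction), the following pure-Python sketch combines three cross-entropy terms into one loss. The function names, the loss weights `lambda_adv` and `lambda_var`, and the simple additive combination are illustrative assumptions, not the paper's actual formulation.

```python
import math

def softmax_cross_entropy(logits, label):
    """Cross-entropy of one example given raw logits and the true class index."""
    m = max(logits)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]

def discriminator_loss(id_logits, id_label,
                       real_fake_logits, is_real,
                       var_logits, var_label,
                       lambda_adv=1.0, lambda_var=1.0):
    """Weighted sum of the three discriminator tasks.

    The weights and the additive form are assumptions for illustration;
    the paper's objective may differ.
    """
    l_id = softmax_cross_entropy(id_logits, id_label)            # face identity
    l_adv = softmax_cross_entropy(real_fake_logits,
                                  1 if is_real else 0)           # real vs. fake prototype
    l_var = softmax_cross_entropy(var_logits, var_label)         # target variation (e.g., pose)
    return l_id + lambda_adv * l_adv + lambda_var * l_var
```

In an adversarial training loop, the discriminator would minimize this combined loss while the generator is updated to fool the real/fake branch and to suppress variation information in its learned representation.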

Updated: 2024-08-22