Attention-Based Two-Stream Convolutional Networks for Face Spoofing Detection
IEEE Transactions on Information Forensics and Security (IF 6.8). Pub Date: 2019-06-17. DOI: 10.1109/tifs.2019.2922241
Haonan Chen, Guosheng Hu, Zhen Lei, Yaowu Chen, Neil M. Robertson, Stan Z. Li

Since the human face preserves the richest information for recognizing individuals, face recognition has been widely investigated and has achieved great success in various applications over the past decades. However, face spoofing attacks (e.g., face video replay attacks) remain a threat to modern face recognition systems. Although many effective anti-spoofing methods have been proposed, we find that the performance of many existing methods degrades under varying illumination. This motivates us to develop illumination-invariant methods for anti-spoofing. In this paper, we propose a two-stream convolutional neural network (TSCNN) that works on two complementary spaces: RGB space (the original imaging space) and multi-scale retinex (MSR) space (an illumination-invariant space). Specifically, the RGB space contains detailed facial textures but is sensitive to illumination; the MSR space is invariant to illumination but contains less detailed facial information. In addition, MSR images effectively capture high-frequency information, which is discriminative for face spoofing detection. Images from the two spaces are fed to the TSCNN to learn discriminative features for anti-spoofing. To effectively fuse the features from the two sources (RGB and MSR), we propose an attention-based fusion method that captures the complementarity of the two features. We evaluate the proposed framework on several databases, i.e., CASIA-FASD, REPLAY-ATTACK, and OULU, and achieve very competitive performance. To further verify the generalization capacity of the proposed strategies, we conduct cross-database experiments, and the results demonstrate the effectiveness of our method.
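The abstract specifies only the overall design (an MSR stream alongside an RGB stream, fused by attention), not the implementation. The sketch below is a minimal illustration of that idea, assuming the standard multi-scale retinex formula MSR(x, y) = Σ_n w_n [log I(x, y) − log(G_σn * I)(x, y)], illustrative Gaussian scales (15, 80, 250), a small convolutional backbone, and a softmax-normalized scalar attention over the two feature vectors; none of these details come from the paper itself.

```python
# Minimal sketch (not the authors' exact architecture): multi-scale retinex
# preprocessing plus a two-stream CNN whose RGB and MSR features are fused by
# learned attention weights. Scales, backbone, and fusion form are assumptions.
import numpy as np
import cv2
import torch
import torch.nn as nn

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    """MSR(x, y) = sum_n w_n * [log I(x, y) - log(G_sigma_n * I)(x, y)]."""
    img = img.astype(np.float64) + 1.0           # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)                           # equal weights w_n = 1/N
    # stretch back to [0, 255] so the result can be used as a network input
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-8)
    return (msr * 255).astype(np.uint8)

class TwoStreamAttentionNet(nn.Module):
    """Two CNN streams (RGB and MSR) fused by softmax-normalized attention."""
    def __init__(self, feat_dim=128):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
        self.rgb_stream, self.msr_stream = stream(), stream()
        # attention module: scores each feature vector, shared across streams
        self.attn = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, 2)  # live vs. spoof

    def forward(self, rgb, msr):
        f_rgb = self.rgb_stream(rgb)              # (B, feat_dim)
        f_msr = self.msr_stream(msr)              # (B, feat_dim)
        scores = torch.cat([self.attn(f_rgb), self.attn(f_msr)], dim=1)
        weights = torch.softmax(scores, dim=1)    # (B, 2) attention weights
        fused = weights[:, :1] * f_rgb + weights[:, 1:] * f_msr
        return self.classifier(fused)

# Usage: a face crop is mapped to both spaces and fed to the two streams.
face = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
msr_face = multi_scale_retinex(face)
to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1).float().unsqueeze(0) / 255.0
logits = TwoStreamAttentionNet()(to_tensor(face), to_tensor(msr_face))
```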

Updated: 2020-04-22