Variance-guided attention-based twin deep network for cross-spectral periocular recognition
Image and Vision Computing ( IF 4.7 ) Pub Date : 2020-09-08 , DOI: 10.1016/j.imavis.2020.104016
Sushree S. Behera , Sapna S. Mishra , Bappaditya Mandal , Niladri B. Puhan

The periocular region is considered an important biometric trait owing to its ease of collection and high societal acceptability. Recent advances in surveillance applications require infra-red (IR) sensing equipment to be deployed in order to capture activity occurring in low-light conditions. This gives rise to the problem of matching periocular images across heterogeneous environments, as it is difficult to obtain large enrollment datasets in the IR modality within a short span of time. Although a number of approaches have studied cross-spectral matching of periocular images, where a probe IR image is matched against enrolled dataset images in the visible (VIS) domain and vice versa, significant challenges still exist for such matching. In this paper, we propose an attention-based twin deep convolutional neural network (CNN) with shared parameters to match periocular images across heterogeneous modalities. We introduce a novel variance-guided objective function in conjunction with the attention module to guide the network to focus more on the relevant regions of the periocular images. The weights of the twin model are learned under this new objective so as to reduce the intra-class variance and increase the inter-class variance of cross-spectral image pairs. Ablation studies and experimental results on three publicly available cross-spectral periocular datasets, containing images from the VIS, near-infrared (NIR), and night-vision domains, show that the proposed deep network achieves state-of-the-art recognition performance on all three datasets.
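The abstract does not give the exact form of the variance-guided objective. As a rough illustration of the underlying idea only (pulling genuine cross-spectral pairs together to reduce intra-class variance, while pushing impostor pairs apart to increase inter-class variance), a minimal NumPy sketch using a standard contrastive-style formulation might look like the following; all names, the margin value, and the loss form are assumptions, not the paper's actual objective:

```python
import numpy as np

def variance_guided_loss(emb_vis, emb_ir, labels, margin=1.0):
    """Toy cross-spectral pair objective (illustrative, not the paper's loss).

    emb_vis, emb_ir : (N, D) embeddings from the two branches of a
                      shared-parameter twin network (VIS and IR inputs).
    labels          : (N,) with 1 for genuine (same-identity) pairs,
                      0 for impostor pairs.
    """
    d = np.linalg.norm(emb_vis - emb_ir, axis=1)        # per-pair distances
    intra = np.mean(d[labels == 1] ** 2)                # shrink genuine-pair spread
    inter = np.mean(np.maximum(margin - d[labels == 0], 0.0) ** 2)  # widen impostor gap
    return intra + inter

rng = np.random.default_rng(0)
vis = rng.normal(size=(6, 8))
ir = vis + 0.1 * rng.normal(size=(6, 8))     # genuine pairs: small spectral shift
labels = np.array([1, 1, 1, 0, 0, 0])
ir[labels == 0] = rng.normal(size=(3, 8))    # impostor pairs: unrelated embeddings
loss = variance_guided_loss(vis, ir, labels)
print(float(loss) > 0.0)
```

In practice the same loss would be backpropagated through both branches of the twin CNN, whose shared weights ensure VIS and IR inputs are mapped into a common embedding space.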




Updated: 2020-09-22