CED-Net: context-aware ear detection network for unconstrained images
Pattern Analysis and Applications (IF 3.7), Pub Date: 2020-11-09, DOI: 10.1007/s10044-020-00914-4
Aman Kamboj, Rajneesh Rani, Aditya Nigam, Ranjeet Ranjan Jha

Personal authentication systems based on biometrics have seen strong demand, mainly due to increasing concern about privacy and security in a variety of applications. Although the suitability of each biometric trait is problem dependent, the human ear has been found to have enough discriminating characteristics to serve as a strong biometric measure. Locating an ear in a face image is a strenuous task; numerous existing approaches have achieved significant performance, but the majority of studies are based on constrained environments. Ear detection, however, poses great difficulty in unconstrained environments, where pose, scale, occlusion, illumination, background clutter, etc., vary to a great extent. To address the problem of ear detection in the wild, we propose two high-performance ear detection models, CED-Net-1 and CED-Net-2, which are fundamentally based on deep convolutional neural networks and primarily use contextual information to detect ears in unconstrained environments. To compare the performance of the proposed models, we have implemented state-of-the-art deep learning models, viz. FRCNN (faster region-based convolutional neural network) and SSD (single shot multibox detector), for the ear detection task. To test the models' generalization, they are evaluated on six different benchmark datasets, viz. IITD, IITK, USTB-DB3, UND-E, UND-J2 and UBEAR, each of which contains different challenging images. The models are compared on performance measures such as IoU (intersection over union), accuracy, precision, recall and F1-score. We observe that our proposed models CED-Net-1 and CED-Net-2 outperform FRCNN and SSD at higher IoU values. An accuracy of 99% is achieved at IoU 0.5 on the majority of the databases. This performance signifies the importance and effectiveness of the models and indicates that they are resilient to environmental conditions.
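The abstract evaluates detectors by counting a predicted box as correct when its IoU with the ground-truth box meets a threshold (e.g. 0.5), then deriving precision, recall and F1-score. The paper's own evaluation code is not shown here; the following is a minimal illustrative sketch of that metric, assuming axis-aligned boxes in (x1, y1, x2, y2) format and one prediction and one ground-truth box per image:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def evaluate(predictions, ground_truths, threshold=0.5):
    """Score paired predicted/ground-truth boxes at an IoU threshold.

    A prediction is a true positive when its IoU with the matching
    ground-truth box meets the threshold; otherwise it counts as both
    a false positive and a missed ground truth.
    """
    tp = sum(1 for p, g in zip(predictions, ground_truths)
             if iou(p, g) >= threshold)
    fp = len(predictions) - tp
    fn = len(ground_truths) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, two unit squares offset by half their width overlap in a region of 25 out of a union of 175 pixels, giving IoU ≈ 0.14, well below the 0.5 threshold used in the paper; a detection must cover most of the true ear region to count as correct.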




Updated: 2020-11-09