Understanding visual lip-based biometric authentication for mobile devices
EURASIP Journal on Information Security (IF 2.5), Pub Date: 2020-03-12, DOI: 10.1186/s13635-020-0102-6
Carrie Wright, Darryl William Stewart

This paper explores the suitability of lip-based authentication as a behavioural biometric for mobile devices. Lip-based biometric authentication is the process of verifying an individual based on visual information taken from the lips while speaking. It is particularly suited to mobile devices because it contains unique information; it offers potential for liveness over existing popular biometrics such as face and fingerprint; and lip movements can be captured using a device's front-facing camera, requiring no dedicated hardware. Despite this potential, research and progress in lip-based biometric authentication have been significantly slower than for other biometrics such as face, fingerprint, or iris.

This paper investigates a state-of-the-art approach using a deep Siamese network, trained with the triplet loss, for one-shot lip-based biometric authentication under real-world challenges. The proposed system, LipAuth, is rigorously examined with real-world data and the challenges that could be expected of a lip-based solution deployed on a mobile device. The work in this paper shows for the first time how a lip-based authentication system performs beyond a closed-set protocol, benchmarking a new open-set protocol with an equal error rate of 1.65% on the XM2VTS dataset.

New datasets, qFace and FAVLIPS, were collected for the work in this paper. They push the field forward by enabling systematic testing of the content and quantities of data needed for lip-based biometric authentication, and they highlight problematic areas for future work. The FAVLIPS dataset was designed to mimic some of the hardest challenges that could be expected in a deployment scenario, including varied spoken content, miming, and a wide range of challenging lighting conditions. The datasets captured for this work are available to other university research groups on request.
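The approach summarised above (a deep Siamese embedding network trained with the triplet loss, evaluated by equal error rate on genuine and impostor trials) can be illustrated with a minimal sketch. The Python/PyTorch code below is a hypothetical illustration under assumed shapes and hyperparameters; the network name LipEmbeddingNet, the 3D-convolution backbone, the margin, and the clip dimensions are all illustrative assumptions, not the authors' LipAuth implementation.

import numpy as np
import torch
import torch.nn as nn

class LipEmbeddingNet(nn.Module):
    # Maps a lip-region clip (frames x height x width) to an L2-normalised embedding.
    # Architecture is an illustrative stand-in, not the one used in the paper.
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, embed_dim),
        )

    def forward(self, clips):                 # clips: (batch, 1, frames, H, W)
        return nn.functional.normalize(self.backbone(clips), dim=1)

def equal_error_rate(genuine, impostor):
    # EER: the operating point where the false accept rate equals the false reject rate.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])  # false accepts
    frr = np.array([np.mean(genuine < t) for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

net = LipEmbeddingNet()
criterion = nn.TripletMarginLoss(margin=0.2)  # pulls anchor/positive together, pushes negative away
optimiser = torch.optim.Adam(net.parameters(), lr=1e-4)

# One illustrative training step; random tensors stand in for cropped lip clips.
anchor   = torch.randn(8, 1, 16, 44, 60)      # clips from the enrolled speaker
positive = torch.randn(8, 1, 16, 44, 60)      # different clips of the same speaker
negative = torch.randn(8, 1, 16, 44, 60)      # clips from other speakers
loss = criterion(net(anchor), net(positive), net(negative))
optimiser.zero_grad()
loss.backward()
optimiser.step()

# At verification time an enrolment/probe pair is compared by cosine similarity of
# their embeddings (one-shot), and the decision threshold can be set at the EER.
with torch.no_grad():
    genuine_scores  = (net(anchor) * net(positive)).sum(dim=1)   # genuine trials
    impostor_scores = (net(anchor) * net(negative)).sum(dim=1)   # impostor trials
print("toy EER:", equal_error_rate(genuine_scores.numpy(), impostor_scores.numpy()))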

Updated: 2020-04-16