A Continuous Articulatory-Gesture-Based Liveness Detection for Voice Authentication on Smart Devices
IEEE Internet of Things Journal (IF 10.6). Pub Date: 2022-08-18. DOI: 10.1109/jiot.2022.3199995
Linghan Zhang, Sheng Tan, Yingying Chen, Jie Yang

Voice biometrics is drawing increasing attention for user authentication on smart devices. However, it is vulnerable to replay attacks, in which adversaries attempt to spoof a voice authentication system with prerecorded voice samples collected from genuine users. To address this threat, we propose VoiceGesture, a liveness detection solution for voice authentication on smart devices such as smartphones and smart speakers. Leveraging advances in the audio hardware of smart devices, VoiceGesture uses the built-in speaker and microphone pair as a Doppler radar to sense articulatory gestures during voice authentication. Experiments with 21 participants and different smart devices show that VoiceGesture achieves over 99% detection accuracy for text-dependent liveness detection and around 98% for text-independent liveness detection. Moreover, VoiceGesture is robust to different device placements and low audio sampling frequencies, and supports medium-range liveness detection on smart speakers in various use scenarios, including smart homes and smart vehicles.
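The core sensing idea can be sketched briefly: the device plays a near-inaudible probe tone while the user speaks, and mouth, lip, and tongue motion Doppler-shifts the reflected tone into sidebands around the carrier, whereas a replayed recording from a loudspeaker produces a much weaker motion signature. The Python fragment below is a minimal illustration of this principle, not the authors' implementation; the 48 kHz sampling rate, 20 kHz carrier, sideband width, and energy-ratio threshold are all illustrative assumptions.

```python
# Minimal sketch of Doppler-based articulatory-gesture sensing (illustrative only;
# not the VoiceGesture implementation). Assumed parameters: 48 kHz sampling,
# 20 kHz probe tone, ad-hoc sideband window and decision threshold.

import numpy as np
from scipy.signal import stft

FS = 48_000          # sampling rate (Hz); common on modern smart devices
CARRIER = 20_000     # near-inaudible probe tone (Hz), played by the built-in speaker
DOPPLER_BAND = 40    # +/- Hz window around the carrier to inspect for Doppler sidebands


def probe_tone(duration_s: float) -> np.ndarray:
    """Generate the near-ultrasonic tone emitted during authentication."""
    t = np.arange(int(duration_s * FS)) / FS
    return 0.5 * np.sin(2 * np.pi * CARRIER * t)


def doppler_energy_ratio(mic_signal: np.ndarray) -> float:
    """Ratio of energy in the Doppler sidebands to energy at the carrier.

    Articulatory motion frequency-shifts part of the reflected tone into
    sidebands near the carrier; a higher ratio indicates more motion.
    """
    f, _, Z = stft(mic_signal, fs=FS, nperseg=4096)
    power = np.abs(Z) ** 2
    bin_width = f[1] - f[0]
    carrier_bin = np.argmin(np.abs(f - CARRIER))
    # Sideband bins: within DOPPLER_BAND of the carrier, excluding the carrier bin itself.
    band = (np.abs(f - CARRIER) <= DOPPLER_BAND) & (np.abs(f - CARRIER) > bin_width)
    sideband_energy = power[band].sum()
    carrier_energy = power[carrier_bin].sum() + 1e-12
    return sideband_energy / carrier_energy


def is_live(mic_signal: np.ndarray, threshold: float = 0.05) -> bool:
    """Crude liveness decision: sufficient Doppler sideband energy => live speaker."""
    return doppler_energy_ratio(mic_signal) > threshold
```

This sketch only checks whether articulatory motion is present around the carrier; the paper's system goes further, using the sensed gesture patterns to make text-dependent and text-independent liveness decisions during voice authentication.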

Updated: 2022-08-18