Robust Speaker Recognition Using Speech Enhancement And Attention Model
arXiv - CS - Computation and Language. Pub Date: 2020-01-14, DOI: arxiv-2001.05031
Yanpei Shi, Qiang Huang, Thomas Hain

In this paper, a novel architecture for speaker recognition is proposed by cascading speech enhancement and speaker processing. Its aim is to improve speaker recognition performance when speech signals are corrupted by noise. Instead of processing speech enhancement and speaker recognition separately, the two modules are integrated into one framework and jointly optimised using deep neural networks. Furthermore, to increase robustness against noise, a multi-stage attention mechanism is employed to highlight speaker-related features learned from context information in the time and frequency domains. To evaluate the speaker identification and verification performance of the proposed approach, we test it on VoxCeleb1, one of the most widely used benchmark datasets. Moreover, the robustness of the proposed approach is also tested on VoxCeleb1 data corrupted by three types of interference (general noise, music, and babble) at different signal-to-noise ratio (SNR) levels. The obtained results show that the proposed approach using speech enhancement and multi-stage attention models outperforms two strong baselines that do not use them in most of the acoustic conditions in our experiments.
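
The following is a minimal PyTorch sketch of the cascaded design summarised above: a speech-enhancement front-end, a two-stage attention block over time-frequency features, and a speaker-embedding back-end trained jointly with a single loss. All layer choices, sizes, and names (EnhancementFrontEnd, TimeFreqAttention, SpeakerNet) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class EnhancementFrontEnd(nn.Module):
    """Predicts a time-frequency mask and applies it to the noisy spectrogram (assumed enhancement stage)."""
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_freq, hidden, num_layers=2, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, spec):                      # spec: (batch, time, freq)
        h, _ = self.rnn(spec)
        return spec * self.mask(h)                # masked (enhanced) spectrogram

class TimeFreqAttention(nn.Module):
    """Two attention stages: weight frequency bins, then weight time frames."""
    def __init__(self, n_freq=257):
        super().__init__()
        self.freq_att = nn.Sequential(nn.Linear(n_freq, n_freq), nn.Softmax(dim=-1))
        self.time_att = nn.Sequential(nn.Linear(n_freq, 1), nn.Softmax(dim=1))

    def forward(self, feats):                     # feats: (batch, time, freq)
        feats = feats * self.freq_att(feats.mean(dim=1, keepdim=True))
        weights = self.time_att(feats)            # (batch, time, 1)
        return (feats * weights).sum(dim=1)       # utterance-level vector

class SpeakerNet(nn.Module):
    """Cascade: enhancement -> attention pooling -> speaker classification."""
    def __init__(self, n_freq=257, n_speakers=1251, emb_dim=512):
        super().__init__()
        self.enhance = EnhancementFrontEnd(n_freq)
        self.attend = TimeFreqAttention(n_freq)
        self.embed = nn.Sequential(nn.Linear(n_freq, emb_dim), nn.ReLU())
        self.classify = nn.Linear(emb_dim, n_speakers)

    def forward(self, noisy_spec):
        enhanced = self.enhance(noisy_spec)       # speech enhancement stage
        pooled = self.attend(enhanced)            # attention pooling over time and frequency
        emb = self.embed(pooled)                  # speaker embedding
        return self.classify(emb), emb            # logits for identification, embedding for verification

# Joint optimisation: a single loss back-propagates through both modules.
model = SpeakerNet()
noisy = torch.randn(4, 300, 257)                  # (batch, frames, frequency bins)
labels = torch.randint(0, 1251, (4,))
logits, _ = model(noisy)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()

In a setup like this, identification would use the classification logits, while verification scores could be taken as the similarity (e.g. cosine) between the returned embeddings of two utterances.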

Updated: 2020-05-25