When Automatic Voice Disguise Meets Automatic Speaker Verification
arXiv - CS - Sound. Pub Date: 2020-09-15, DOI: arxiv-2009.06863
Linlin Zheng, Jiakang Li, Meng Sun, Xiongwei Zhang, Thomas Fang Zheng

The technique of transforming a voice to hide the speaker's real identity is called voice disguise. Automatic voice disguise (AVD), which modifies the spectral and temporal characteristics of voices with various algorithms, can easily be carried out with software accessible to the public. AVD poses a great threat to both human listening and automatic speaker verification (ASV). In this paper, we show that ASV is not only a victim of AVD but can also serve as a tool to defeat some simple types of AVD. First, three representative types of AVD are introduced: pitch scaling, vocal tract length normalization (VTLN), and voice conversion (VC). State-of-the-art ASV methods are then used to objectively evaluate the impact of AVD on ASV in terms of equal error rate (EER). Moreover, an approach is proposed to restore a disguised voice to its original version by minimizing a function of ASV scores with respect to the restoration parameters. Experiments are conducted on disguised voices from VoxCeleb, a dataset recorded in real-world noisy scenarios. The results show that, for voice disguise by pitch scaling, the proposed approach obtains an EER of around 7%, compared with the 30% EER of a recently proposed baseline based on the ratio of fundamental frequencies. The proposed approach also generalizes to disguise with nonlinear frequency warping in VTLN, reducing its EER from 34.3% to 18.5%. However, it is difficult to restore the source speakers in VC with our approach; more complex forms of restoration functions or other paralinguistic cues may be necessary to invert the nonlinear transform in VC. Finally, contrastive visualization of ASV features with and without restoration illustrates the role of the proposed approach in an intuitive way.

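To make the restoration idea concrete, here is a minimal Python sketch for the pitch-scaling case only: it grid-searches an inverse pitch shift and keeps the candidate whose speaker embedding is most similar to the enrolled speaker under a cosine ASV score. The embedding extractor embed_fn, the semitone grid, and the use of librosa's pitch shifter are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch (assumptions: cosine ASV scoring on speaker embeddings,
# pitch-scaling disguise inverted by a semitone pitch shift).
import numpy as np
import librosa

def restore_by_asv(disguised_wav, sr, enroll_embedding, embed_fn,
                   n_steps_grid=np.arange(-8.0, 8.5, 0.5)):
    """Search candidate pitch shifts (in semitones) and return the restored
    waveform whose ASV score against the enrolled speaker is highest.
    embed_fn is a hypothetical speaker-embedding extractor (e.g. an x-vector
    model); enroll_embedding is the target speaker's enrollment embedding."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    best_steps, best_score, best_wav = 0.0, -np.inf, disguised_wav
    for n_steps in n_steps_grid:
        # Apply a candidate inverse transform to the disguised voice.
        candidate = librosa.effects.pitch_shift(disguised_wav, sr=sr, n_steps=float(n_steps))
        # ASV similarity acts as the objective being maximized
        # (equivalently, a distance being minimized).
        score = cosine(embed_fn(candidate), enroll_embedding)
        if score > best_score:
            best_steps, best_score, best_wav = float(n_steps), score, candidate
    return best_wav, best_steps, best_score

In this sketch the restoration parameter is a single scalar, so exhaustive search suffices; richer restoration functions (e.g. the frequency warping in VTLN or the nonlinear mappings in VC) would require a larger parameter space and a more capable optimizer.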
Updated: 2020-09-16