Exploring Disentanglement with Multilingual and Monolingual VQ-VAE
arXiv - CS - Sound. Pub Date: 2021-05-04. DOI: arxiv-2105.01573
Jennifer Williams, Jason Fong, Erica Cooper, Junichi Yamagishi

This work examines the content and usefulness of disentangled phone and speaker representations from two separately trained VQ-VAE systems: one trained on multilingual data and another trained on monolingual data. We explore the multi- and monolingual models using four small proof-of-concept tasks: copy-synthesis, voice transformation, linguistic code-switching, and content-based privacy masking. From these tasks, we reflect on how disentangled phone and speaker representations can be used to manipulate speech in a meaningful way. Our experiments demonstrate that the VQ representations are suitable for these tasks, including creating new voices by mixing speaker representations together. We also present our novel technique to conceal the content of targeted words within an utterance by manipulating phone VQ codes, while retaining speaker identity and intelligibility of surrounding words. Finally, we discuss recommendations for further increasing the viability of disentangled representations.
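The operations the abstract describes can be illustrated with a minimal sketch: nearest-neighbor vector quantization (the "VQ" in VQ-VAE) maps each encoder frame to a discrete codebook entry, new voices can be created by mixing speaker representations, and content masking amounts to overwriting the phone VQ codes of targeted frames. The function names, codebook size, and the use of simple linear interpolation for speaker mixing are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def quantize(z, codebook):
    """Nearest-neighbor vector quantization: map each encoder
    output frame to the index of the closest codebook entry."""
    # Squared distances between each frame and each code: (frames, codes)
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

def mix_speakers(s_a, s_b, alpha=0.5):
    """Hypothetical speaker mixing: linearly interpolate two
    speaker representations to form a 'new voice'. The paper
    mixes speaker representations; the exact scheme may differ."""
    return alpha * s_a + (1.0 - alpha) * s_b

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 discrete codes, 4-dim each
z = rng.normal(size=(5, 4))          # 5 encoder output frames
idx, z_q = quantize(z, codebook)

# Content-based privacy masking sketch: replace the phone VQ codes
# of the frames covering a target word (here, frames 1-2) with a
# neutral code, leaving the speaker representation untouched.
masked = idx.copy()
masked[1:3] = 0
```

Because the phone and speaker representations are disentangled, editing the phone code sequence in this way leaves the speaker identity pathway unchanged, which is what allows the surrounding words to remain intelligible in the masked utterance.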

Updated: 2021-05-05