A Neural Text-to-Speech Model Utilizing Broadcast Data Mixed with Background Music
arXiv - CS - Machine Learning. Pub Date: 2021-03-04, DOI: arXiv:2103.03049
Hanbin Bae, Jae-Sung Bae, Young-Sun Joo, Young-Ik Kim, Hoon-Young Cho

Recently, it has become easier to obtain speech data from various media such as the internet or YouTube, but directly using such data to train a neural text-to-speech (TTS) model is difficult: the proportion of clean speech is insufficient, and the remainder is mixed with background music. Even with the global style token (GST) approach, this problem is not resolved. We therefore propose the following method to successfully train an end-to-end TTS model with limited broadcast data. First, background music is removed from the speech by introducing a music filter. Second, a GST-TTS model with an auxiliary quality classifier is trained on the filtered speech together with a small amount of clean speech. In particular, the quality classifier encourages the embedding vector of the GST layer to represent the speech quality (filtered or clean) of the input speech. Experimental results verify that the proposed method synthesizes speech of much higher quality than conventional methods.
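The training objective implied above can be sketched as a TTS reconstruction loss plus an auxiliary binary classification loss on the GST style embedding. The following is a minimal numpy sketch under assumed names and shapes (`quality_classifier_loss`, `joint_loss`, the weight `lam`, and the toy classifier are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quality_classifier_loss(style_emb, w, b, is_clean):
    """Binary cross-entropy: predict clean (1) vs. filtered (0) speech from
    the GST style embedding, pushing the embedding to encode speech quality.
    (Hypothetical linear classifier; the paper's classifier may differ.)"""
    p = sigmoid(style_emb @ w + b)
    y = 1.0 if is_clean else 0.0
    return -(y * np.log(p + 1e-9) + (1.0 - y) * np.log(1.0 - p + 1e-9))

def joint_loss(mel_pred, mel_target, style_emb, w, b, is_clean, lam=0.1):
    """Total objective: L1 reconstruction on mel-spectrograms plus the
    weighted auxiliary quality loss (lam is an assumed weighting)."""
    recon = np.abs(mel_pred - mel_target).mean()
    return recon + lam * quality_classifier_loss(style_emb, w, b, is_clean)

# Toy example with random tensors standing in for real model outputs.
rng = np.random.default_rng(0)
style = rng.normal(size=8)                       # toy GST embedding
w, b = rng.normal(size=8), 0.0                   # toy classifier parameters
mel_p = rng.normal(size=(80, 50))                # predicted mel-spectrogram
mel_t = rng.normal(size=(80, 50))                # target mel-spectrogram
loss = joint_loss(mel_p, mel_t, style, w, b, is_clean=True)
print(loss > 0.0)
```

During training, minimizing the auxiliary term forces the style embedding to separate filtered from clean inputs, so at inference a "clean" style token can be selected to synthesize high-quality speech.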

Updated: 2021-03-05