Any-to-Many Voice Conversion with Location-Relative Sequence-to-Sequence Modeling
arXiv - CS - Sound. Pub Date: 2020-09-06. DOI: arxiv-2009.02725
Songxiang Liu, Yuewen Cao, Disong Wang, Xixin Wu, Xunying Liu, Helen Meng

This paper proposes an any-to-many, location-relative, sequence-to-sequence (seq2seq) based, non-parallel voice conversion approach, which combines a bottle-neck feature extractor (BNE) with a seq2seq based synthesis module. During the training stage, an encoder-decoder based hybrid connectionist-temporal-classification-attention (CTC-attention) phoneme recognizer is trained, whose encoder has a bottle-neck layer. A BNE is obtained from the phoneme recognizer and is utilized to extract speaker-independent, dense and rich linguistic representations from spectral features. Then a multi-speaker, location-relative attention based seq2seq synthesis model is trained to reconstruct spectral features from the bottle-neck features, conditioned on speaker representations that control the speaker identity of the generated speech. To mitigate the difficulty of aligning long sequences with seq2seq models, we down-sample the input spectral features along the temporal dimension and equip the synthesis model with a discretized mixture-of-logistics (MoL) attention mechanism. Since the phoneme recognizer is trained on a large speech recognition corpus, the proposed approach can conduct any-to-many voice conversion. Objective and subjective evaluations show that the proposed any-to-many approach achieves superior voice conversion performance in terms of both naturalness and speaker similarity. Ablation studies confirm the effectiveness of the feature selection and model design strategies in the proposed approach. The proposed VC approach can readily be extended to support any-to-any VC (also known as one/few-shot VC), and achieves high performance according to objective and subjective evaluations.
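The discretized MoL attention mentioned above computes each alignment weight as the probability mass that a mixture of logistic distributions assigns to an encoder position, with mixture means that can only move forward in time. Below is a minimal NumPy sketch of one such attention step; the function names, tensor shapes, and the softplus parameterization of step sizes and scales are illustrative assumptions (in the spirit of location-relative attention for TTS), not the authors' exact implementation.

    # Minimal sketch of one discretized mixture-of-logistics (MoL)
    # attention step. All names and parameterizations are assumptions.
    import numpy as np

    def softplus(x):
        return np.logaddexp(0.0, x)  # numerically stable log(1 + e^x)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def mol_attention_step(prev_means, params, num_enc_steps):
        """One decoder step of MoL attention.

        prev_means: (K,) mixture means from the previous decoder step.
        params:     (3K,) raw outputs of an MLP on the attention-RNN
                    state, split into step sizes, scales and weights.
        Returns the updated means and the (num_enc_steps,) alignment.
        """
        raw_delta, raw_scale, raw_weight = np.split(params, 3)

        delta = softplus(raw_delta)          # step size > 0: monotonic movement
        scale = softplus(raw_scale) + 1e-5   # logistic scale > 0
        weight = np.exp(raw_weight - raw_weight.max())
        weight = weight / weight.sum()       # mixture weights via softmax

        means = prev_means + delta           # means only move forward in time

        j = np.arange(num_enc_steps)[:, None]  # encoder positions, (T, 1)
        # Mass each logistic component assigns to position j:
        # F(j + 0.5) - F(j - 0.5), with F the logistic CDF.
        cdf_hi = sigmoid((j + 0.5 - means) / scale)
        cdf_lo = sigmoid((j - 0.5 - means) / scale)
        align = ((cdf_hi - cdf_lo) * weight).sum(axis=1)
        return means, align

    # Usage: a 3-component mixture attending over 50 encoder (BNE) frames.
    rng = np.random.default_rng(0)
    means = np.zeros(3)
    for step in range(4):
        means, align = mol_attention_step(means, rng.normal(size=9), 50)
        print(step, align.argmax(), align.sum().round(3))

Because every step size is constrained to be positive, the alignment is monotonic by construction, which is what makes this family of location-relative mechanisms robust when aligning the long, down-sampled feature sequences described above.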

Updated: 2020-11-19