Emotion Speech Synthesis Method Based on Multi-Channel Time–Frequency Domain Generative Adversarial Networks (MC-TFD GANs) and Mixup
Arabian Journal for Science and Engineering (IF 2.6), Pub Date: 2021-08-24, DOI: 10.1007/s13369-021-06090-9
Ning Jia, Chunjun Zheng

As one of the most challenging and promising topics in the speech field, emotion speech synthesis is a focus of current research. At present, the emotion expression ability, synthesis speed, and robustness of synthetic speech all need improvement. Cycle-consistent Adversarial Networks (CycleGAN) provide a two-way breakthrough in transforming emotional corpus information, but a gap remains between the real target and the synthesized speech. To narrow this gap, we propose an emotion speech synthesis method that combines multi-channel Time–frequency Domain Generative Adversarial Networks (MC-TFD GANs) and Mixup. It comprises three stages: multi-channel Time–frequency Domain GANs (MC-TFD GANs), loss estimation based on Mixup, and effective emotion-region stacking based on Mixup. Within these stages, a gating unit, GTLU (gated tanh linear units), and an image-based representation of the speech saliency region are designed. The first stage combines a Time–frequency Domain MaskCycleGAN based on the improved GTLU with a time-domain CycleGAN based on the saliency region to form the multi-channel GAN. Based on the Mixup method, the calculation of the loss and of the intensification degree of the emotion region are designed. Comparative experiments against several popular speech synthesis methods were carried out on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus, with a bi-directional three-layer long short-term memory (LSTM) model used as the verification model. The experimental results showed that the mean opinion score (MOS) and the unweighted accuracy (UA) of the speech generated by the proposed method improved by 4% and 2.7%, respectively. The model was superior to existing GAN models in both subjective evaluation and objective experiments, ensuring that the speech it generates has higher reliability, better fluency, and stronger emotional expression ability.
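The abstract names GTLU (gated tanh linear units) as the gating unit inside the time–frequency channel but does not define it. Below is a minimal PyTorch sketch, assuming a WaveNet/PixelCNN-style gated unit in which a tanh feature branch is modulated by a sigmoid gate; the convolution shapes are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class GTLU(nn.Module):
    """Gated tanh linear unit: a tanh-activated feature branch modulated
    by a sigmoid gate, in the spirit of the gated units of WaveNet/PixelCNN.
    The paper's "improved GTLU" is not specified in the abstract, so this
    form and the layer shapes are assumptions for illustration only."""
    def __init__(self, channels: int):
        super().__init__()
        self.feature = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.gate = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # elementwise product of the tanh feature map and the sigmoid gate
        return torch.tanh(self.feature(x)) * torch.sigmoid(self.gate(x))
```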
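The Mixup-based loss estimation is likewise only named in the abstract. The sketch below shows standard Mixup (Zhang et al., 2018), which trains on convex combinations of sample pairs and the matching convex combination of their losses, with the mixing coefficient drawn from a Beta(α, α) distribution; how the paper adapts this to its loss estimation and emotion-region stacking stages is not detailed here.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def mixup_loss(model, x, y, alpha: float = 0.2):
    """Standard Mixup training loss: mix random pairs within the batch
    and combine their cross-entropy losses with the same coefficient.
    alpha controls how strongly samples are interpolated."""
    lam = Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))           # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[idx]    # convex combination of inputs
    logits = model(x_mix)
    # convex combination of the two labels' losses
    return lam * F.cross_entropy(logits, y) + \
           (1.0 - lam) * F.cross_entropy(logits, y[idx])
```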
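For the verification model, the abstract specifies a bi-directional three-layer LSTM evaluated on IEMOCAP. A sketch follows, in which the input feature dimension (e.g. 40-dim mel-spectrogram frames), hidden size, and the number of emotion classes are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class EmotionVerifier(nn.Module):
    """Bi-directional three-layer LSTM classifier of the kind the paper
    uses to verify synthesized speech. Feature dimension, hidden size,
    and 4 emotion classes are illustrative assumptions."""
    def __init__(self, n_features: int = 40, hidden: int = 128, n_classes: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=3,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) -> emotion logits
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the final time step
```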

Updated: 2021-08-24