Correlating Subword Articulation with Lip Shapes for Embedding Aware Audio-Visual Speech Enhancement
arXiv - CS - Sound. Pub Date: 2020-09-21, arXiv:2009.09561
Hang Chen, Jun Du, Yu Hu, Li-Rong Dai, Bao-Cai Yin, Chin-Hui Lee

In this paper, we propose a visual embedding approach to improving embedding-aware speech enhancement (EASE) by synchronizing visual lip frames at the phone and place-of-articulation levels. We first extract visual embedding from lip frames using a pre-trained phone or articulation-place recognizer for visual-only EASE (VEASE). Next, we extract audio-visual embedding from noisy speech and lip videos in an information-intersection manner, exploiting the complementarity of audio and visual features for multi-modal EASE (MEASE). Experiments on the TCD-TIMIT corpus corrupted by simulated additive noises show that our proposed subword-based VEASE approach is more effective than conventional word-level embedding. Moreover, visual embedding at the articulation-place level, leveraging the high correlation between place of articulation and lip shapes, performs even better than embedding at the phone level. Finally, the proposed MEASE framework, incorporating both audio and visual embedding, yields significantly better speech quality and intelligibility than the best visual-only and audio-only EASE systems.
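To make the embedding-aware pipeline concrete, the sketch below shows the general MEASE pattern the abstract describes: a visual embedder standing in for the pre-trained phone/articulation-place recognizer, an audio embedder for the noisy speech, a fusion step, and a mask-based enhancer conditioned on the fused embedding. All module sizes, layer choices, and the concat-plus-linear fusion are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of an embedding-aware audio-visual enhancement model.
# Dimensions, layers, and the fusion scheme are assumptions for illustration.
import torch
import torch.nn as nn

class VisualEmbedder(nn.Module):
    """Stand-in for a pre-trained phone / articulation-place recognizer
    whose hidden activations serve as the visual embedding."""
    def __init__(self, lip_feat_dim=512, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(lip_feat_dim, emb_dim, batch_first=True)

    def forward(self, lip_frames):            # (B, T, lip_feat_dim)
        emb, _ = self.rnn(lip_frames)
        return emb                             # (B, T, emb_dim)

class AudioEmbedder(nn.Module):
    """Embeds the noisy magnitude spectrogram frame by frame."""
    def __init__(self, n_freq=257, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_freq, emb_dim, batch_first=True)

    def forward(self, noisy_mag):              # (B, T, n_freq)
        emb, _ = self.rnn(noisy_mag)
        return emb                              # (B, T, emb_dim)

class MEASE(nn.Module):
    """Predicts a time-frequency mask from the noisy magnitude
    concatenated with the fused audio-visual embedding."""
    def __init__(self, n_freq=257, emb_dim=128):
        super().__init__()
        self.visual = VisualEmbedder(emb_dim=emb_dim)
        self.audio = AudioEmbedder(n_freq, emb_dim)
        # Assumed fusion: concatenate the two embeddings, then project.
        self.fuse = nn.Linear(2 * emb_dim, emb_dim)
        self.enhancer = nn.Sequential(
            nn.Linear(n_freq + emb_dim, 256), nn.ReLU(),
            nn.Linear(256, n_freq), nn.Sigmoid(),   # mask values in [0, 1]
        )

    def forward(self, noisy_mag, lip_frames):
        av = torch.cat([self.audio(noisy_mag), self.visual(lip_frames)], dim=-1)
        emb = self.fuse(av)
        mask = self.enhancer(torch.cat([noisy_mag, emb], dim=-1))
        return mask * noisy_mag                 # enhanced magnitude estimate

# Example: a 3-second utterance at 100 frames/s with time-aligned lip features.
model = MEASE()
enhanced = model(torch.randn(1, 300, 257).abs(), torch.randn(1, 300, 512))
print(enhanced.shape)  # torch.Size([1, 300, 257])
```

In this reading, the VEASE variant would drop the audio embedder and condition the mask on the visual embedding alone; the paper's reported gain from articulation-place embeddings would come from what the pre-trained recognizer was trained to predict, not from the enhancement network itself.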

Updated: 2020-09-22