ARoBERT: An ASR Robust Pre-Trained Language Model for Spoken Language Understanding
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2022-02-24. DOI: 10.1109/taslp.2022.3153268
Chengyu Wang¹, Suyang Dai², Yipeng Wang², Fei Yang¹, Minghui Qiu², Kehan Chen², Wei Zhou², Jun Huang²