Learning Multilingual Representation for Natural Language Understanding with Enhanced Cross-Lingual Supervision
arXiv - CS - Computation and Language. Pub Date: 2021-06-09, DOI: arxiv-2106.05166
Yinpeng Guo, Liangyou Li, Xin Jiang, Qun Liu

Recently, pre-training multilingual language models has shown great potential for learning multilingual representations, a crucial topic in natural language processing. Prior works generally use a single mixed attention (MA) module, following TLM (Conneau and Lample, 2019), to attend to intra-lingual and cross-lingual contexts equivalently and simultaneously. In this paper, we propose a network named decomposed attention (DA) as a replacement for MA. DA consists of an intra-lingual attention (IA) and a cross-lingual attention (CA), which model intra-lingual and cross-lingual supervision respectively. In addition, we introduce a language-adaptive re-weighting strategy during training to further boost the model's performance. Experiments on various cross-lingual natural language understanding (NLU) tasks show that the proposed architecture and learning strategy significantly improve the model's cross-lingual transferability.
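The abstract does not give implementation details, but the split into IA and CA can be illustrated with a minimal sketch. The snippet below assumes a TLM-style input that concatenates a parallel sentence pair and uses per-token language ids to build the two attention masks; the module names, mask construction, and the additive combination of the two outputs are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (not the authors' code) of decomposed attention over a
# TLM-style concatenated pair [src ; tgt]. IA lets each token attend only to
# same-language context; CA lets it attend only to the other language.
import torch
import torch.nn as nn

class DecomposedAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.intra = nn.MultiheadAttention(d_model, n_heads, batch_first=True)  # IA
        self.cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)  # CA

    def forward(self, x: torch.Tensor, lang_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); lang_ids: (batch, seq) language id per token.
        # Assumes every sequence contains both languages (a parallel pair),
        # otherwise the cross-lingual mask would leave some rows fully masked.
        same_lang = lang_ids.unsqueeze(2) == lang_ids.unsqueeze(1)  # (B, S, S)
        # In nn.MultiheadAttention, True in attn_mask means "do not attend".
        intra_mask = ~same_lang   # IA: block other-language positions
        cross_mask = same_lang    # CA: block same-language positions
        ia_out, _ = self.intra(
            x, x, x,
            attn_mask=intra_mask.repeat_interleave(self.intra.num_heads, dim=0))
        ca_out, _ = self.cross(
            x, x, x,
            attn_mask=cross_mask.repeat_interleave(self.cross.num_heads, dim=0))
        # One possible way to merge the two supervisions; the paper may combine
        # them differently (e.g. gating or the language-adaptive re-weighting).
        return ia_out + ca_out
```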

Updated: 2021-06-10