LET: Linguistic Knowledge Enhanced Graph Transformer for Chinese Short Text Matching
arXiv - CS - Computation and Language Pub Date : 2021-02-25 , DOI: arxiv-2102.12671
Boer Lyu, Lu Chen, Su Zhu, Kai Yu

Chinese short text matching is a fundamental task in natural language processing. Existing approaches usually take Chinese characters or words as input tokens. They have two limitations: 1) some Chinese words are polysemous, and their semantic information is not fully utilized; 2) some models suffer from potential issues caused by word segmentation. Here we introduce HowNet as an external knowledge base and propose a Linguistic knowledge Enhanced graph Transformer (LET) to deal with word ambiguity. Additionally, we adopt the word lattice graph as input to maintain multi-granularity information. Our model is also complementary to pre-trained language models. Experimental results on two Chinese datasets show that our models outperform various typical text matching approaches. An ablation study also indicates that both semantic information and multi-granularity information are important for text matching modeling.
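To make the word lattice idea concrete, the short Python sketch below (not the authors' implementation) enumerates character- and word-level spans of a Chinese sentence against a toy lexicon and links adjacent spans into a lattice; the example sentence, the vocabulary, and the function name build_word_lattice are illustrative assumptions only.

# Minimal sketch of a word lattice graph used as multi-granularity input.
# Hypothetical vocabulary and sentence; not the LET authors' code.
from collections import defaultdict

def build_word_lattice(sentence, vocab, max_word_len=4):
    """Collect every in-vocabulary span as a lattice node: (start, end, word)."""
    nodes = []
    for start in range(len(sentence)):
        for end in range(start + 1, min(start + max_word_len, len(sentence)) + 1):
            word = sentence[start:end]
            if end - start == 1 or word in vocab:  # single characters are always kept
                nodes.append((start, end, word))
    # Add an edge from node u to node v whenever v starts where u ends (adjacent spans).
    starts_at = defaultdict(list)
    for node in nodes:
        starts_at[node[0]].append(node)
    edges = [(u, v) for u in nodes for v in starts_at[u[1]]]
    return nodes, edges

if __name__ == "__main__":
    vocab = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}  # toy lexicon
    nodes, edges = build_word_lattice("南京市长江大桥", vocab)
    for u, v in edges:
        print(f"{u[2]} -> {v[2]}")

The lattice keeps both character-level and word-level segmentations of the same span, which is what lets a graph transformer avoid committing to a single, possibly erroneous word segmentation.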

Updated: 2021-02-26