Prior Knowledge Driven Label Embedding for Slot Filling in Natural Language Understanding
arXiv - CS - Computation and Language, Pub Date: 2020-03-22, DOI: arxiv-2003.09831
Su Zhu, Zijian Zhao, Rao Ma, and Kai Yu

Traditional slot filling in natural language understanding (NLU) predicts a one-hot vector for each word. This form of label representation lacks semantic correlation modelling, which leads to a severe data sparsity problem, especially when adapting an NLU model to a new domain. To address this issue, a novel label embedding based slot filling framework is proposed in this paper. Here, a distributed label embedding is constructed for each slot using prior knowledge. Three encoding methods are investigated to incorporate different kinds of prior knowledge about slots: atomic concepts, slot descriptions, and slot exemplars. The proposed label embeddings tend to share text patterns and reuse data across different slot labels, which makes them useful for adaptive NLU with limited data. Moreover, since label embedding is independent of the NLU model, it is compatible with almost all deep learning based slot filling models. The proposed approaches are evaluated on three datasets. Experiments on single-domain and domain adaptation tasks show that label embedding achieves significant performance improvements over the traditional one-hot label representation as well as advanced zero-shot approaches.
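To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how a slot tagger can score words against fixed label embeddings instead of learning a one-hot softmax classifier. The class name, constructor arguments, and the way the label matrix is built are all hypothetical; the paper's specific encoders for atomic concepts, slot descriptions, and slot exemplars are not reproduced here.

```python
import torch
import torch.nn as nn

class LabelEmbeddingTagger(nn.Module):
    """Hypothetical sketch: BiLSTM word encoder whose output layer is a
    fixed label-embedding matrix built from prior knowledge about slots."""

    def __init__(self, vocab_size, word_dim, hidden_dim, label_matrix):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        self.encoder = nn.LSTM(word_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # label_matrix: (num_slots, label_dim), e.g. averaged word vectors of
        # each slot's description or exemplars (assumed construction).
        self.register_buffer("label_emb", label_matrix)
        self.proj = nn.Linear(hidden_dim, label_matrix.size(1))

    def forward(self, word_ids):
        h, _ = self.encoder(self.embed(word_ids))   # (B, T, hidden_dim)
        h = self.proj(h)                            # (B, T, label_dim)
        # Score each word against every slot label by dot product: the output
        # layer is shared through label semantics rather than per-slot one-hot
        # rows, so semantically related slots can reuse parameters and data.
        return torch.matmul(h, self.label_emb.t())  # (B, T, num_slots)
```

Because the label matrix is an input rather than a learned classifier head, the same word encoder can in principle be applied to a new domain by supplying embeddings for the new slots, which is the adaptation setting the abstract describes.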

Updated: 2020-06-16