An intention multiple-representation model with expanded information
Computer Speech & Language (IF 4.3), Pub Date: 2021-01-17, DOI: 10.1016/j.csl.2021.101196
Jingxiang Hu, Junjie Peng, Wenqiang Zhang, Lizhe Qi, Miao Hu, Huanxiang Zhang

Short text is the main carrier for people to express their ideas and opinions, and understanding the meaning of a short text or recognizing the semantic patterns of different short texts is both important and challenging. Most existing methods use word embeddings and short-text interaction to learn the semantic patterns of short text pairs. However, some of these methods are complicated and cannot fully capture the relations among words or the interaction between short text pairs. To address this problem, a self-attention based model, Knowledge Learning for Matching Question (KLMQ), is proposed. It uses part-of-speech information to mine the relations among words, and it obtains the relations of short texts from grammar, syntax, and morphology. Meanwhile, it adopts an information fusion strategy to enhance the interaction between short text pairs, which ensures the model works well with expanded information such as word order, word correlations, and the relations of short text pairs. To verify the correctness of the proposed model and the effectiveness of the expanded information, extensive experiments were carried out on public datasets. Experimental results show that the model outperforms traditional neural network models and that the expanded information can substantially improve the performance of intention multiple-representation recognition.
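The abstract does not give implementation details, but the core idea it describes (enriching word embeddings with part-of-speech information before a self-attention layer) can be sketched minimally. Everything below is illustrative, not the authors' KLMQ model: the vocabulary sizes, embedding dimensions, concatenation-based fusion, and the single unprojected attention head are all assumptions chosen for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a (seq_len, dim) matrix.

    Learned Q/K/V projections are omitted for brevity; each token
    attends to every token in the same sequence.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)          # (seq_len, seq_len) similarities
    return softmax(scores, axis=-1) @ X    # attention-weighted token mixture

rng = np.random.default_rng(0)
vocab_size, n_pos_tags, d_word, d_pos = 100, 10, 8, 4   # hypothetical sizes
E_word = rng.normal(size=(vocab_size, d_word))          # word embedding table
E_pos = rng.normal(size=(n_pos_tags, d_pos))            # POS-tag embedding table

word_ids = np.array([3, 17, 42])   # a toy 3-token short text
pos_ids = np.array([1, 2, 1])      # its part-of-speech tag ids

# Fuse lexical and grammatical information by concatenation, one simple
# fusion strategy, so attention scores can reflect POS relations too.
X = np.concatenate([E_word[word_ids], E_pos[pos_ids]], axis=-1)  # (3, 12)
H = self_attention(X)                                            # (3, 12)
print(H.shape)
```

In a full matching model, both short texts of a pair would be encoded this way and their representations combined through an interaction layer; this sketch only shows the POS-fusion step feeding self-attention.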




Updated: 2021-01-20