Bi-directional Long Short-Term Memory Model with Semantic Positional Attention for the Question Answering System
ACM Transactions on Asian and Low-Resource Language Information Processing ( IF 1.8 ) Pub Date : 2021-06-30 , DOI: 10.1145/3439800
Mingwen Bi 1 , Qingchuan Zhang 2 , Min Zuo 2 , Zelong Xu 2 , Qingyu Jin 2
Affiliation  

The intelligent question answering system aims to provide quick and concise feedback on users' questions. Although phrase-level and attention-based models have improved performance, sentence components and positional information are still not sufficiently emphasized. This article combines Ci-Lin and word2vec to divide all words in the question-answer pairs into groups according to their semantics and selects one kernel word per group; the remaining words are common words, and a semantic mapping mechanism is established between kernel words and common words. With this Chinese semantic mapping mechanism, the common words in all questions and answers are replaced by their semantic kernel words, normalizing the semantic representation. Meanwhile, building on the bi-directional LSTM model, this article introduces a method that combines semantic role labeling with positional context, dividing each sentence into multiple semantic segments according to its semantic logic. Higher weights are assigned to neighboring words within the same semantic segment, yielding a semantic role labeling positional attention mechanism on top of the bi-directional LSTM model (BLSTM-SRLP). Comparative experiments on a food safety question-answering dataset (FS-QA) demonstrate the good performance of the BLSTM-SRLP model.
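The kernel-word normalization step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the synonym groups have already been built (in the paper, via Ci-Lin and word2vec similarity) and only shows the replacement of common words by their group's kernel word. The example groups are invented for demonstration.

```python
def build_mapping(groups):
    """groups: list of (kernel_word, [common_words]) pairs.
    Returns a dict mapping every word to its group's kernel word."""
    mapping = {}
    for kernel, commons in groups:
        mapping[kernel] = kernel  # a kernel word maps to itself
        for w in commons:
            mapping[w] = kernel
    return mapping

def normalize(tokens, mapping):
    """Replace each token with its kernel word; words outside
    any group pass through unchanged."""
    return [mapping.get(t, t) for t in tokens]

# Hypothetical synonym groups, invented for this sketch
groups = [("食品", ["食物", "吃的"]), ("安全", ["平安"])]
mapping = build_mapping(groups)
print(normalize(["食物", "安全", "问题"], mapping))  # ['食品', '安全', '问题']
```

In the paper's pipeline this normalization would be applied to every question and answer before embedding, so that synonymous surface forms share one representation.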

Updated: 2021-06-30