ReBoost: a retrieval-boosted sequence-to-sequence model for neural response generation
Information Retrieval Journal (IF 1.7), Pub Date: 2019-09-23, DOI: 10.1007/s10791-019-09364-x
Yutao Zhu , Zhicheng Dou , Jian-Yun Nie , Ji-Rong Wen

Human–computer conversation is an active research topic in natural language processing. One representative approach to building conversation systems uses the sequence-to-sequence (Seq2seq) model based on neural networks. However, with limited input information, the Seq2seq model tends to generate meaningless and trivial responses. It can be greatly enhanced if more supplementary information is provided during the generation process. In this work, we propose to utilize retrieved responses to boost the Seq2seq model and generate more informative replies. Our method, called ReBoost, incorporates retrieved results into the Seq2seq model through a hierarchical structure, so that the input message and the retrieved results jointly influence the generation process. Experiments on two benchmark datasets demonstrate that our model generates more informative responses and outperforms state-of-the-art response generation models in both automatic and human evaluations.
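The abstract only outlines the architecture, but a minimal PyTorch sketch can make the idea concrete: retrieved responses are encoded with a two-level (word-level, then response-level) encoder, and the decoder attends to both the message encoding and the retrieved-response encodings at every step. Everything below (class names, dimensions, and the simple dot-product attention) is an illustrative assumption, not the authors' released implementation.

```python
# Illustrative sketch of a retrieval-boosted Seq2seq decoder step.
# All names and dimensions are assumptions for exposition, not the paper's exact model.
import torch
import torch.nn as nn


class HierarchicalRetrievalEncoder(nn.Module):
    """Encode each retrieved response word-by-word, then summarize the set of responses."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)   # word level
        self.resp_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)   # response level

    def forward(self, retrieved):                       # (batch, n_resp, resp_len) token ids
        b, n, l = retrieved.size()
        words = self.embed(retrieved.view(b * n, l))    # (b*n, resp_len, emb)
        _, h_word = self.word_rnn(words)                # (1, b*n, hid): one vector per response
        resp_vecs = h_word.squeeze(0).view(b, n, -1)
        resp_states, _ = self.resp_rnn(resp_vecs)       # contextualize responses against each other
        return resp_states                              # (batch, n_resp, hid)


class RetrievalBoostedDecoder(nn.Module):
    """GRU decoder that attends over message states and retrieved-response states jointly."""

    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRUCell(emb_dim + 2 * hid_dim, hid_dim)
        self.attn_msg = nn.Linear(hid_dim, hid_dim)     # projection for message attention
        self.attn_ret = nn.Linear(hid_dim, hid_dim)     # projection for retrieval attention
        self.out = nn.Linear(hid_dim, vocab_size)

    def attend(self, proj, query, memory):
        # Dot-product attention: score each memory slot against the projected decoder state.
        scores = torch.bmm(memory, proj(query).unsqueeze(2)).squeeze(2)   # (batch, T)
        weights = torch.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), memory).squeeze(1)         # (batch, hid)

    def forward(self, prev_token, hidden, msg_states, ret_states):
        ctx_msg = self.attend(self.attn_msg, hidden, msg_states)
        ctx_ret = self.attend(self.attn_ret, hidden, ret_states)
        # Message context and retrieval context are fed in together, so both
        # sources influence the next-token distribution at every step.
        x = torch.cat([self.embed(prev_token), ctx_msg, ctx_ret], dim=1)
        hidden = self.rnn(x, hidden)
        return self.out(hidden), hidden                 # logits over vocab, new decoder state
```

In this sketch, `msg_states` would come from a standard encoder over the input message (e.g., another GRU, not shown), and concatenating the two attention contexts with the previous token embedding is one plausible way for the message and the retrieved results to shape generation jointly, as the abstract describes.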

Updated: 2019-09-23