End-to-End latent-variable task-oriented dialogue system with exact log-likelihood optimization
World Wide Web (IF 3.7), Pub Date: 2019-06-07, DOI: 10.1007/s11280-019-00688-8. Haotian Xu, Haiyun Peng, Haoran Xie, Erik Cambria, Liuyang Zhou, Weiguo Zheng
We propose an end-to-end dialogue model based on a hierarchical encoder-decoder that employs a discrete latent variable to learn underlying dialogue intentions. The system can model both the structure of utterances, governed by the statistics of the language, and the dependencies among utterances in a dialogue, without manual dialogue state design. We argue that the discrete latent variable captures the intentions that guide machine response generation. Because intention selection at each dialogue turn can be formulated as a sequential decision-making process, we further propose a model that can be refined autonomously with reinforcement learning. Our experiments show that the model optimized with exact maximum-likelihood estimation (MLE) is substantially more robust than neural variational inference in terms of dialogue success rate, at only a limited cost in BLEU.
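The phrase "exact log-likelihood optimization" refers to marginalizing the discrete latent intention analytically instead of bounding the likelihood with a variational objective (ELBO): when the latent variable z takes only a few values, log p(x) = log Σ_z p(z) p(x | z) can be computed by direct summation. The following minimal sketch (not the authors' code; the intention count and all probabilities are hypothetical toy values) illustrates the computation in log-space:

```python
import math

def log_sum_exp(vals):
    """Numerically stable log(sum(exp(v) for v in vals))."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def exact_log_likelihood(log_prior, log_cond):
    """Exact marginal log-likelihood over a small discrete latent:
    log p(x) = log sum_z p(z) * p(x | z), computed in log-space."""
    return log_sum_exp([lp + lc for lp, lc in zip(log_prior, log_cond)])

# Toy example with 3 latent "intentions" (hypothetical numbers):
# prior p(z) and response likelihood p(x | z) for each intention.
log_prior = [math.log(0.5), math.log(0.3), math.log(0.2)]
log_cond = [math.log(0.1), math.log(0.4), math.log(0.2)]

ll = exact_log_likelihood(log_prior, log_cond)
# p(x) = 0.5*0.1 + 0.3*0.4 + 0.2*0.2 = 0.21
```

Because this marginal is exact, its gradient trains the model directly, whereas variational inference optimizes a lower bound whose looseness can hurt downstream metrics such as dialogue success rate.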
Updated: 2019-06-07