Preference-based interactive multi-document summarisation
Information Retrieval Journal (IF 1.7), Pub Date: 2019-11-19, DOI: 10.1007/s10791-019-09367-8
Yang Gao, Christian M. Meyer, Iryna Gurevych

Interactive NLP is a promising paradigm to close the gap between automatic NLP systems and the human upper bound. Preference-based interactive learning has been applied successfully, but existing methods require several thousand interaction rounds even in simulations with perfect user feedback. In this paper, we study preference-based interactive summarisation. To reduce the number of interaction rounds, we propose the Active Preference-based ReInforcement Learning (APRIL) framework. APRIL uses active learning to query the user, preference learning to learn a summary ranking function from the preferences, and neural reinforcement learning to efficiently search for the (near-)optimal summary. Our results show that users can easily provide reliable preferences over summaries and that APRIL outperforms the state-of-the-art preference-based interactive method in both simulation and real-user experiments.
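
The abstract describes a three-stage loop: actively select which summaries to ask the user about, learn a ranking function from the collected pairwise preferences, and then use that function as a reward signal for reinforcement learning. The sketch below illustrates that loop under simplifying assumptions (linear ranking function, simulated user, greedy search instead of a neural RL policy); the feature vectors, update rule, and selection heuristic are illustrative, not the authors' implementation.

```python
# Minimal sketch of a preference-based interactive loop in the spirit of APRIL:
# active querying -> preference learning -> reward-guided search.
# All names, features, and the simulated user are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Candidate summaries, represented by hypothetical feature vectors
# (e.g. coverage, redundancy, length features of extractive summaries).
candidates = rng.normal(size=(50, 5))

# Hidden "true" user utility, used only to simulate preference feedback.
true_w = rng.normal(size=5)

# Linear ranking function learned from pairwise preferences (Bradley-Terry style).
w = np.zeros(5)

def query_user(i, j):
    """Simulated user: prefers the summary with higher true utility."""
    return i if candidates[i] @ true_w >= candidates[j] @ true_w else j

def select_pair():
    """Active-learning heuristic: pick the pair the current ranker is least sure about."""
    idx = rng.choice(len(candidates), size=20, replace=False)
    pairs = [(a, b) for a in idx for b in idx if a < b]
    return min(pairs, key=lambda p: abs((candidates[p[0]] - candidates[p[1]]) @ w))

for _ in range(30):  # small interaction budget
    i, j = select_pair()
    winner = query_user(i, j)
    loser = j if winner == i else i
    diff = candidates[winner] - candidates[loser]
    # Logistic preference-learning update: move the ranker toward the preferred summary.
    p = 1.0 / (1.0 + np.exp(-(diff @ w)))
    w += 0.1 * (1.0 - p) * diff

# Stand-in for the RL stage: use the learned ranking function as a reward and
# return the highest-scoring candidate (APRIL instead trains a neural RL policy
# that constructs the summary).
best = int(np.argmax(candidates @ w))
print("selected summary index:", best)
```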



Updated: 2020-04-21