Asking Clarifying Questions Based on Negative Feedback in Conversational Search
arXiv - CS - Information Retrieval. Pub Date: 2021-07-12, DOI: arxiv-2107.05760
Keping Bi, Qingyao Ai, W. Bruce Croft

Users often need to look through multiple search result pages or reformulate queries when they have complex information-seeking needs. Conversational search systems can improve user satisfaction by asking questions that clarify users' search intents. Answering a series of questions starting with "what/why/how", however, can take significant effort. To quickly identify user intent and reduce effort during interactions, we propose an intent clarification task based on yes/no questions, where the system needs to ask the correct question about intents within the fewest conversation turns. In this task, it is essential to use negative feedback on the previous questions in the conversation history. To this end, we propose a Maximum-Marginal-Relevance (MMR) based BERT model (MMR-BERT) that leverages negative feedback, following the MMR principle, to select the next clarifying question. Experiments on the Qulac dataset show that MMR-BERT significantly outperforms state-of-the-art baselines on the intent identification task, and the selected questions also achieve significantly better performance in the associated document retrieval tasks.
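To illustrate the MMR principle behind the question-selection step, here is a minimal sketch. It assumes pre-computed vector representations and a simple cosine similarity; these helper names (`cosine`, `select_next_question`, `rejected_vecs`, `lam`) are illustrative assumptions, and the actual MMR-BERT model scores candidates with BERT rather than the plain vector similarities used here.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_next_question(query_vec, candidate_vecs, rejected_vecs, lam=0.7):
    """Pick the next clarifying question MMR-style: favor relevance to the
    original query while penalizing similarity to questions that already
    received negative (no) answers in the conversation history."""
    best_idx, best_score = None, -np.inf
    for i, cand in enumerate(candidate_vecs):
        relevance = cosine(cand, query_vec)
        # Redundancy w.r.t. previously rejected questions (0 if none asked yet).
        redundancy = max((cosine(cand, r) for r in rejected_vecs), default=0.0)
        score = lam * relevance - (1.0 - lam) * redundancy
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

In a conversation loop, each question answered "no" would be appended to `rejected_vecs`, steering later selections toward aspects of the query not yet covered; MMR-BERT replaces these cosine scores with learned BERT matching scores.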

Updated: 2021-07-14