Refocusing on Relevance: Personalization in NLG
arXiv - CS - Human-Computer Interaction. Pub Date: 2021-09-10, DOI: arxiv-2109.05140. Shiran Dudy, Steven Bedrick, Bonnie Webber
Many NLG tasks such as summarization, dialogue response, or open domain
question answering focus primarily on a source text in order to generate a
target response. This standard approach falls short, however, when a user's
intent or context of work is not easily recoverable based solely on that source
text -- a scenario that we argue is more of the rule than the exception. In
this work, we argue that NLG systems in general should place a much higher
level of emphasis on making use of additional context, and suggest that
relevance (as used in Information Retrieval) be thought of as a crucial tool
for designing user-oriented text-generating tasks. We further discuss possible
harms and hazards around such personalization, and argue that value-sensitive
design represents a crucial path forward through these challenges.
Updated: 2021-09-14