A Bag of Tricks for Dialogue Summarization
arXiv - CS - Computation and Language Pub Date : 2021-09-16 , DOI: arxiv-2109.08232 Muhammad Khalifa, Miguel Ballesteros, Kathleen McKeown
Dialogue summarization poses its own peculiar challenges compared with the
summarization of news or scientific articles. In this work, we explore four
different challenges of the task: handling and differentiating parts of the
dialogue belonging to multiple speakers, negation understanding, reasoning
about the situation, and informal language understanding. Using a pretrained
sequence-to-sequence language model, we explore speaker name substitution,
negation scope highlighting, multi-task learning with relevant tasks, and
pretraining on in-domain data. Our experiments show that our proposed
techniques indeed improve summarization performance, outperforming strong
baselines.
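Of the techniques named above, speaker name substitution is the most mechanical: each distinct speaker name in a dialogue is mapped to a canonical token so the model does not have to generalize over arbitrary names. The sketch below is illustrative only and assumes a simple `Name: utterance` line format; the paper's actual preprocessing may differ.

```python
def substitute_speaker_names(dialogue: str) -> str:
    """Replace each distinct speaker name with a canonical Speaker<N> token.

    Assumes each turn is on its own line, formatted as 'Name: utterance'
    (an assumption for illustration; not necessarily the paper's format).
    """
    mapping: dict[str, str] = {}
    out_lines = []
    for line in dialogue.splitlines():
        name, sep, utterance = line.partition(": ")
        if not sep:  # not a speaker turn; keep the line unchanged
            out_lines.append(line)
            continue
        token = mapping.setdefault(name, f"Speaker{len(mapping) + 1}")
        out_lines.append(f"{token}: {utterance}")
    # Also substitute in-utterance mentions of the speakers' names.
    text = "\n".join(out_lines)
    for name, token in mapping.items():
        text = text.replace(name, token)
    return text
```

Applied to `"Alice: Hey Bob!\nBob: Hi Alice."`, this yields `"Speaker1: Hey Speaker2!\nSpeaker2: Hi Speaker1."`. At summary time the mapping can be inverted to restore the original names.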
Updated: 2021-09-20