Exploring Multitask Learning for Low-Resource Abstractive Summarization
arXiv - CS - Computation and Language. Pub Date: 2021-09-17, DOI: arxiv-2109.08565
Ahmed Magooda, Mohamed Elaraby, Diane Litman

This paper explores the effect of using multitask learning for abstractive summarization in the context of small training corpora. In particular, we incorporate four different tasks (extractive summarization, language modeling, concept detection, and paraphrase detection) both individually and in combination, with the goal of enhancing the target task of abstractive summarization via multitask learning. We show that for many task combinations, a model trained in a multitask setting outperforms a model trained only for abstractive summarization, with no additional summarization data introduced. Additionally, we do a comprehensive search and find that certain tasks (e.g. paraphrase detection) consistently benefit abstractive summarization, not only when combined with other tasks but also when using different architectures and training corpora.
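To make the multitask setup concrete, the sketch below shows one common way such training can be wired up: a shared encoder with one lightweight head per task, and gradients from every task flowing into the shared parameters. This is a minimal illustrative assumption, not the authors' implementation; the module names, dimensions, classifier-style auxiliary heads, and the toy batching loop are all hypothetical stand-ins.

```python
# Minimal multitask-learning sketch (assumed setup, not the paper's code):
# a shared Transformer encoder with task-specific heads and a summed loss.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):
        return self.encoder(self.embed(token_ids))  # (batch, seq, d_model)

class MultitaskModel(nn.Module):
    """Shared encoder plus one head for the target task and each auxiliary task."""
    def __init__(self, vocab_size=32000, d_model=256):
        super().__init__()
        self.encoder = SharedEncoder(vocab_size, d_model)
        self.heads = nn.ModuleDict({
            # Target task: token-level generation head (a stand-in for a full decoder).
            "abstractive": nn.Linear(d_model, vocab_size),
            # Auxiliary tasks from the abstract, reduced to simple token classifiers here.
            "extractive": nn.Linear(d_model, 2),    # extractive summarization (keep/drop)
            "lm": nn.Linear(d_model, vocab_size),   # language modeling
            "concept": nn.Linear(d_model, 2),       # concept detection
            "paraphrase": nn.Linear(d_model, 2),    # paraphrase detection
        })

    def forward(self, token_ids, task):
        hidden = self.encoder(token_ids)
        return self.heads[task](hidden)

model = MultitaskModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy training step over task-specific batches (random data for illustration).
batches = {
    "abstractive": (torch.randint(0, 32000, (2, 16)), torch.randint(0, 32000, (2, 16))),
    "paraphrase": (torch.randint(0, 32000, (2, 16)), torch.randint(0, 2, (2, 16))),
}
optimizer.zero_grad()
total_loss = 0.0
for task, (inputs, labels) in batches.items():
    logits = model(inputs, task)  # (batch, seq, n_classes)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
    total_loss = total_loss + loss
total_loss.backward()  # gradients from all tasks update the shared encoder
optimizer.step()
```

The key design point this illustrates is that only the heads are task-specific; the auxiliary losses act as regularization on the shared encoder, which is how multitask learning can help the target task without adding any summarization data.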

Updated: 2021-09-20