A Deep Reinforced Model for Zero-Shot Cross-Lingual Summarization with Bilingual Semantic Similarity Rewards
arXiv - CS - Computation and Language Pub Date : 2020-06-27 , DOI: arxiv-2006.15454
Zi-Yi Dou, Sachin Kumar, Yulia Tsvetkov

Cross-lingual text summarization aims at generating a document summary in one language given input in another language. It is a practically important but under-explored task, primarily due to the dearth of available data. Existing methods resort to machine translation to synthesize training data, but such pipeline approaches suffer from error propagation. In this work, we propose an end-to-end cross-lingual text summarization model. The model uses reinforcement learning to directly optimize a bilingual semantic similarity metric between the summaries generated in a target language and gold summaries in a source language. We also introduce techniques to pre-train the model leveraging monolingual summarization and machine translation objectives. Experimental results in both English--Chinese and English--German cross-lingual summarization settings demonstrate the effectiveness of our methods. In addition, we find that reinforcement learning models with bilingual semantic similarity as rewards generate more fluent sentences than strong baselines.
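The core training idea in the abstract — reinforcement learning that directly optimizes a bilingual semantic similarity reward between a generated target-language summary and a source-language gold summary — can be sketched as a REINFORCE-style objective with a self-critical baseline. This is an illustrative sketch only: the names `bilingual_similarity` and `reinforce_loss` are hypothetical, and the reward is stubbed with simple token overlap, whereas the paper uses a learned cross-lingual semantic similarity metric.

```python
# Hypothetical sketch of RL training with a bilingual similarity reward.
# In the paper the reward is a learned cross-lingual semantic similarity;
# here it is stubbed as token-set overlap purely for illustration.
def bilingual_similarity(candidate_tokens, reference_tokens):
    """Toy stand-in reward: fraction of reference token types covered."""
    overlap = len(set(candidate_tokens) & set(reference_tokens))
    return overlap / max(len(set(reference_tokens)), 1)

def reinforce_loss(log_probs, sampled_tokens, greedy_tokens, reference_tokens):
    """REINFORCE with a self-critical baseline:
    loss = -(r(sampled) - r(greedy)) * sum of log-probs of sampled tokens.
    Minimizing this pushes up the probability of samples that score
    better than the greedy decode under the similarity reward."""
    reward = bilingual_similarity(sampled_tokens, reference_tokens)
    baseline = bilingual_similarity(greedy_tokens, reference_tokens)
    return -(reward - baseline) * sum(log_probs)
```

In a real system, `log_probs` would come from the summarization model's decoder, the greedy decode serves as the variance-reducing baseline, and the gradient of this loss with respect to model parameters implements the policy-gradient update the abstract refers to.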

Updated: 2020-06-30