TLDR: Token Loss Dynamic Reweighting for Reducing Repetitive Utterance Generation
arXiv - CS - Computation and Language. Pub Date: 2020-03-26, DOI: arxiv-2003.11963
Shaojie Jiang, Thomas Wolf, Christof Monz, Maarten de Rijke

Natural Language Generation (NLG) models are prone to generating repetitive utterances. In this work, we study the repetition problem for encoder-decoder models, using both recurrent neural network (RNN) and transformer architectures. To this end, we consider the chit-chat task, where the problem is more prominent than in other tasks that need encoder-decoder architectures. We first study the influence of model architectures. By using pre-attention and highway connections for RNNs, we manage to achieve lower repetition rates. However, this method does not generalize to other models such as transformers. We hypothesize that the deeper reason is that in the training corpora, there are hard tokens that are more difficult for a generative model to learn than others and, once learning has finished, hard tokens are still under-learned, so that repetitive generations are more likely to happen. Based on this hypothesis, we propose token loss dynamic reweighting (TLDR) that applies differentiable weights to individual token losses. By using higher weights for hard tokens and lower weights for easy tokens, NLG models are able to learn individual tokens at different paces. Experiments on chit-chat benchmark datasets show that TLDR is more effective in repetition reduction for both RNN and transformer architectures than baselines using different weighting functions.
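The abstract describes TLDR only at a high level: per-token losses are scaled by differentiable weights so that hard (under-learned) tokens receive more weight than easy ones. The sketch below is a minimal, hypothetical PyTorch-style illustration of that idea; the sigmoid weighting function, the `alpha` and `beta` hyperparameters, and the padding handling are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def tldr_style_loss(logits, targets, pad_id=0, alpha=5.0, beta=1.0):
    """Illustrative token-loss dynamic reweighting (not the paper's exact form).

    logits:  (batch, seq_len, vocab) decoder outputs
    targets: (batch, seq_len) gold token ids
    """
    vocab = logits.size(-1)
    # Per-token negative log-likelihood, kept unreduced so each token
    # can be weighted individually.
    token_loss = F.cross_entropy(
        logits.view(-1, vocab), targets.view(-1),
        ignore_index=pad_id, reduction="none",
    ).view_as(targets).float()

    # Hypothetical differentiable weighting: higher per-token loss
    # ("hard" token) -> weight closer to 1; lower loss -> weight closer to 0.
    # Detaching the loss inside the weight is an alternative design choice.
    weights = torch.sigmoid(alpha * (token_loss - beta))

    # Average over non-padding positions only.
    mask = (targets != pad_id).float()
    return (weights * token_loss * mask).sum() / mask.sum().clamp(min=1.0)
```

In this sketch the weighted loss simply replaces the usual mean cross-entropy during training, so tokens the model already predicts well contribute less to the gradient than tokens it still gets wrong.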

Updated: 2020-04-10