Meta-CoTGAN: A Meta Cooperative Training Paradigm for Improving Adversarial Text Generation
arXiv - CS - Computation and Language. Pub Date: 2020-03-12, DOI: arxiv-2003.11530
Haiyan Yin, Dingcheng Li, Xu Li, Ping Li

Training generative models that can produce high-quality text with sufficient diversity is an important open problem for the Natural Language Generation (NLG) community. Recently, generative adversarial models have been applied extensively to text generation tasks, where adversarially trained generators alleviate the exposure bias suffered by conventional maximum likelihood approaches and achieve promising generation quality. However, due to the notorious mode-collapse defect of adversarial training, adversarially trained generators face a quality-diversity trade-off: the generator tends to sacrifice generation diversity severely in order to increase generation quality. In this paper, we propose a novel approach that aims to improve the performance of adversarial text generation by efficiently decelerating mode collapse during adversarial training. To this end, we introduce a cooperative training paradigm in which a language model is trained cooperatively alongside the generator, and we use the language model to shape the generator's output distribution against mode collapse. Moreover, to engage the cooperative update to the generator in a principled way, we formulate a meta-learning mechanism in which the cooperative update serves as a high-level meta task, with the intuition of ensuring that the generator's parameters remain resistant to mode collapse after each adversarial update. In our experiments, we demonstrate that the proposed approach effectively slows down the pace of mode collapse for adversarial text generators. Overall, the proposed method outperforms the baseline approaches by significant margins in terms of both generation quality and diversity on the evaluated domains.
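The abstract describes the training mechanics only at a high level. The toy PyTorch snippet below is a minimal sketch of the meta cooperative update it outlines, not the authors' implementation: a one-layer "generator" first takes a differentiable adversarial step, and the cooperative (language-model) loss is then evaluated at the post-update parameters and backpropagated through that step, MAML-style. All names (gen_logits, cooperative_loss, the frozen discriminator/LM stand-ins) and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, ALPHA, LAMBDA = 50, 16, 0.1, 1.0

# Toy "generator": a single linear map from a context embedding to
# next-token logits, held as an explicit tensor so we can
# differentiate through the simulated adversarial update.
theta = (0.01 * torch.randn(VOCAB, DIM)).requires_grad_()

# Frozen stand-ins (assumption: in the paper the discriminator and the
# language model are trained jointly; fixed here to isolate the
# generator's meta update).
disc_scores = 0.01 * torch.randn(VOCAB)  # discriminator "realness" per token
lm_logits = 0.01 * torch.randn(VOCAB)    # cooperatively trained LM's logits

def gen_logits(params, ctx):
    return ctx @ params.t()              # (batch, VOCAB)

def adversarial_loss(params, ctx):
    # The generator tries to concentrate mass on tokens the
    # discriminator scores as realistic.
    probs = F.softmax(gen_logits(params, ctx), dim=-1)
    return -(probs * disc_scores).sum(dim=-1).mean()

def cooperative_loss(params, ctx):
    # Cross-entropy of the LM's distribution under the generator,
    # i.e. KL(LM || G) up to a constant; penalizes the generator for
    # dropping modes the language model still supports.
    log_g = F.log_softmax(gen_logits(params, ctx), dim=-1)
    lm_probs = F.softmax(lm_logits, dim=-1)
    return -(lm_probs * log_g).sum(dim=-1).mean()

for step in range(101):
    ctx = torch.randn(32, DIM)

    # Inner adversarial step, kept differentiable (create_graph=True)
    # so the outer loss can backpropagate through it.
    l_adv = adversarial_loss(theta, ctx)
    (g_adv,) = torch.autograd.grad(l_adv, theta, create_graph=True)
    theta_post = theta - ALPHA * g_adv

    # Meta objective: the *post-adversarial-update* parameters should
    # still score well on the cooperative (language-model) loss.
    l_meta = l_adv + LAMBDA * cooperative_loss(theta_post, ctx)
    (g,) = torch.autograd.grad(l_meta, theta)
    with torch.no_grad():
        theta -= ALPHA * g

    if step % 25 == 0:
        print(f"step {step:3d}  adv={l_adv.item():+.4f}  "
              f"coop={cooperative_loss(theta.detach(), ctx).item():.4f}")
```

The create_graph=True call is what makes the cooperative update act as a meta task here: the cooperative gradient flows through the adversarial step, so the pre-update parameters are nudged toward regions where that step does not collapse the output distribution.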

Last updated: 2020-03-26