AGGGEN: Ordering and Aggregating while Generating
arXiv - CS - Computation and Language. Pub Date: 2021-06-10. DOI: arxiv-2106.05580
Xinnuo Xu, Ondřej Dušek, Verena Rieser, Ioannis Konstas

We present AGGGEN (pronounced 'again'), a data-to-text model which re-introduces two explicit sentence planning stages into neural data-to-text systems: input ordering and input aggregation. In contrast to previous work using sentence planning, our model is still end-to-end: AGGGEN performs sentence planning at the same time as generating text, by learning latent alignments (via semantic facts) between the input representation and the target text. Experiments on the WebNLG and E2E challenge data show that, by using fact-based alignments, our approach is more interpretable, expressive, robust to noise, and easier to control, while retaining the fluency advantages of end-to-end systems. Our code is available at https://github.com/XinnuoXu/AggGen.
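To make the two planning stages concrete, below is a minimal, hypothetical sketch of what ordering and aggregating WebNLG-style triples into a sentence plan could look like. The function names, the sort key, and the fixed-size grouping heuristic are invented for illustration only; in AGGGEN itself both decisions are latent and learned jointly with generation, not hand-coded rules.

```python
# Hypothetical sketch of the two sentence-planning stages (ordering and
# aggregation) on a WebNLG-style input. This is NOT the AggGen
# implementation; the plan format and heuristics here are invented.

from typing import List, Tuple

# A WebNLG-style input: subject-predicate-object triples ("facts").
triples: List[Tuple[str, str, str]] = [
    ("Aarhus_Airport", "cityServed", "Aarhus"),
    ("Aarhus", "country", "Denmark"),
    ("Aarhus_Airport", "runwayLength", "2702.0"),
]

def order_facts(facts: List[Tuple[str, str, str]]) -> List[Tuple[str, str, str]]:
    """Stage 1 (input ordering): decide the sequence in which facts are
    realised. AggGen learns this latently; here we simply sort by subject."""
    return sorted(facts, key=lambda t: t[0])

def aggregate_facts(ordered, group_size: int = 2):
    """Stage 2 (input aggregation): group consecutive facts that should be
    verbalised together as one sentence. AggGen infers these groupings;
    here we chunk greedily into fixed-size groups."""
    return [ordered[i:i + group_size] for i in range(0, len(ordered), group_size)]

plan = aggregate_facts(order_facts(triples))
for i, sentence_group in enumerate(plan, 1):
    # Each group would then be verbalised by the decoder as one sentence,
    # with the fact-to-text alignment learned end to end.
    print(f"sentence {i}: {sentence_group}")
```

In the model described by the paper, the sentence plan is not produced by fixed rules like these; it emerges from latent alignments between semantic facts and target text, which is what makes the output both controllable and end-to-end trainable.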

Updated: 2021-06-11