GTAE: Graph Transformer–Based Auto-Encoders for Linguistic-Constrained Text Style Transfer
ACM Transactions on Intelligent Systems and Technology (IF 7.2). Pub Date: 2021-06-16. DOI: 10.1145/3448733
Yukai Shi, Sen Zhang, Chenxing Zhou, Xiaodan Liang, Xiaojun Yang, Liang Lin

Non-parallel text style transfer has attracted increasing research interest in recent years. Despite the success of encoder-decoder frameworks in transferring style, current approaches still lack the ability to preserve the content, and even the logic, of original sentences, mainly due to a large unconstrained model space or oversimplified assumptions about the latent embedding space. Since language itself is an intelligent product of humans with certain grammars, and by its nature has a limited rule-based model space, relieving this problem requires reconciling the model capacity of deep neural networks with the intrinsic model constraints imposed by human linguistic rules. To this end, we propose a method called Graph Transformer–based Auto-Encoder (GTAE), which models a sentence as a linguistic graph and performs feature extraction and style transfer at the graph level, so as to maximally retain the content and the linguistic structure of original sentences. Quantitative experiments on three non-parallel text style transfer tasks show that our model outperforms state-of-the-art methods in content preservation, while achieving comparable performance on transfer accuracy and sentence naturalness.
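The abstract does not spell out the GTAE architecture, but its central idea of performing attention "at the graph level" can be illustrated with a minimal sketch: self-attention whose scores are masked by the adjacency matrix of a dependency parse, so that information flows only along the sentence's grammatical structure. All names, shapes, and the toy graph below are hypothetical and are not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention(X, A, Wq, Wk, Wv):
    """Self-attention restricted to the edges of a linguistic graph.

    X: (n, d) token embeddings; A: (n, n) 0/1 adjacency matrix of the
    dependency parse (with self-loops). Token i attends only to tokens j
    with A[i, j] > 0, so attention is constrained by linguistic structure
    rather than ranging freely over the whole sentence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])
    scores = np.where(A > 0, scores, -1e9)   # mask out non-edges
    attn = softmax(scores, axis=-1)
    return attn @ V, attn

# Toy 4-token "sentence" whose dependency graph is a simple chain.
rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))
A = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # self-loops + chain edges
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = graph_attention(X, A, Wq, Wk, Wv)
```

In such a scheme the attention weight between non-adjacent tokens is driven to zero by the mask, which is one way a model can be prevented from scrambling the grammatical relations of the input while its style-related features are rewritten.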

Updated: 2021-06-16