Encoder–Decoder Couplet Generation Model Based on ‘Trapezoidal Context’ Character Vector
The Computer Journal (IF 1.5), Pub Date: 2020-06-24, DOI: 10.1093/comjnl/bxaa048
Rui Gao 1, Yuanyuan Zhu 2, Mingye Li 3, Shoufeng Li 1, Xiaohu Shi 1

This paper studies a couplet generation model that automatically generates the second line of a couplet given the first line. Unlike other sequence generation problems, couplet generation must consider not only the sequential context within a line but also the relationships between the corresponding words of the first and second lines. Therefore, a 'trapezoidal context' character embedding vector model is developed first, which takes the 'sequence context' and the 'corresponding word context' into account simultaneously. A typical encoder–decoder framework is then adopted to solve the sequence-to-sequence problem, with a bi-directional GRU as the encoder and a GRU as the decoder. To further improve the semantic consistency between the first and second lines of a couplet, the pre-trained sentence vector of the first line is added to the model's attention mechanism. To verify the effectiveness of the method, it is applied to a real data set. Experimental results show that the proposed model is competitive with state-of-the-art methods, and that both adding the sentence vector to the attention mechanism and using trapezoidal context character vectors improve the algorithm's performance.
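The page does not include the authors' implementation; the following is a minimal PyTorch sketch of the architecture the abstract describes: a bi-directional GRU encoder, a GRU decoder, and an additive attention mechanism whose query is augmented with a pre-trained sentence vector of the first line. All layer names and sizes, and the exact way `sent_vec` enters the attention score, are illustrative assumptions, and a plain character embedding stands in here for the paper's trapezoidal context character vectors.

```python
# Minimal sketch (not the authors' code) of a BiGRU encoder / GRU decoder
# couplet model with sentence-vector-augmented attention, as outlined above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoupletEncoderDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, sent_dim=256):
        super().__init__()
        # Plain character embedding; the paper would use trapezoidal context vectors here.
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Encoder: bi-directional GRU over the first line.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Decoder: uni-directional GRU that generates the second line.
        self.decoder = nn.GRU(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        # Additive attention whose query also contains the pre-trained sentence vector.
        self.attn = nn.Linear(2 * hid_dim + hid_dim + sent_dim, hid_dim)
        self.attn_v = nn.Linear(hid_dim, 1, bias=False)
        self.bridge = nn.Linear(2 * hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def attend(self, dec_hidden, enc_outputs, sent_vec):
        # dec_hidden: (B, H); enc_outputs: (B, T, 2H); sent_vec: (B, S)
        T = enc_outputs.size(1)
        query = torch.cat([dec_hidden, sent_vec], dim=-1)           # (B, H + S)
        query = query.unsqueeze(1).expand(-1, T, -1)                # (B, T, H + S)
        energy = torch.tanh(self.attn(torch.cat([enc_outputs, query], dim=-1)))
        weights = F.softmax(self.attn_v(energy).squeeze(-1), dim=-1)  # (B, T)
        context = torch.bmm(weights.unsqueeze(1), enc_outputs)      # (B, 1, 2H)
        return context

    def forward(self, first_line, second_line, sent_vec):
        # first_line, second_line: (B, T) character ids; sent_vec: (B, S)
        enc_outputs, enc_hidden = self.encoder(self.embedding(first_line))
        # Merge the two directions' final states to initialise the decoder state.
        dec_hidden = torch.tanh(
            self.bridge(torch.cat([enc_hidden[0], enc_hidden[1]], dim=-1))
        ).unsqueeze(0)                                               # (1, B, H)

        dec_emb = self.embedding(second_line)                        # teacher forcing
        logits = []
        for t in range(second_line.size(1)):
            context = self.attend(dec_hidden[-1], enc_outputs, sent_vec)
            step_in = torch.cat([dec_emb[:, t:t + 1, :], context], dim=-1)
            dec_out, dec_hidden = self.decoder(step_in, dec_hidden)
            logits.append(self.out(dec_out.squeeze(1)))
        return torch.stack(logits, dim=1)                            # (B, T, vocab)
```

In this sketch the sentence vector is simply concatenated into the attention query at every decoding step; how the paper actually injects the pre-trained first-line vector into attention, and how the trapezoidal context embeddings are constructed, is an assumption and would need to be checked against the full text.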
