Enhanced aspect-based sentiment analysis models with progressive self-supervised attention learning
Artificial Intelligence (IF 14.4), Pub Date: 2021-02-19, DOI: 10.1016/j.artint.2021.103477
Jinsong Su, Jialong Tang, Hui Jiang, Ziyao Lu, Yubin Ge, Linfeng Song, Deyi Xiong, Le Sun, Jiebo Luo

In aspect-based sentiment analysis (ABSA), many neural models are equipped with an attention mechanism to quantify the contribution of each context word to sentiment prediction. However, such a mechanism suffers from one drawback: only a few frequent words with clear sentiment polarity tend to be taken into consideration for the final sentiment decision, while the abundant infrequent sentiment words are ignored by models. To deal with this issue, we propose a progressive self-supervised attention learning approach for attentional ABSA models. In this approach, we iteratively perform sentiment prediction on all training instances and continually learn useful attention supervision information in the meantime. At each training iteration, the context word with the highest impact on the sentiment prediction of each instance, identified from its attention weight or gradient, is extracted as a word with an active influence on a correct prediction or a misleading influence on an incorrect one. Words extracted in this way are then masked in subsequent iterations. To exploit these extracted words for refining ABSA models, we augment the conventional training objective with a regularization term that encourages ABSA models not only to take full advantage of the extracted active context words but also to decrease the weights of the misleading words. We integrate the proposed approach into three state-of-the-art neural ABSA models. Experimental results and in-depth analyses show that our approach yields better attention results and significantly enhances the performance of all three models. We release the source code and trained models at https://github.com/DeepLearnXMU/PSSAttention.
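For readers wanting a concrete picture, the extraction-and-masking loop and the augmented objective described above could look roughly like the following PyTorch sketch. The model interface (returning both logits and attention weights), the mask token, and the hyperparameters `n_iters` and `gamma` are illustrative assumptions, and the exact form of the regularizer may differ from the paper's; see the repository linked above for the released implementation.

```python
import torch
import torch.nn.functional as F

def extract_supervision(model, tokens, aspect, labels, n_iters=5, mask_id=0):
    """Repeatedly predict, pick the most-attended context word per
    instance, label it active (prediction correct) or misleading
    (prediction incorrect), then mask it before the next pass.
    `model` is a hypothetical attentional ABSA model returning
    (logits, attention_weights)."""
    tokens = tokens.clone()
    active, misleading = [], []                # lists of (instance, position)
    for _ in range(n_iters):
        logits, attn = model(tokens, aspect)   # attn: [batch, seq_len]
        preds = logits.argmax(dim=-1)
        top = attn.argmax(dim=-1)              # highest-impact context word
        for i in range(tokens.size(0)):
            pos = top[i].item()
            if preds[i] == labels[i]:
                active.append((i, pos))        # supported a correct decision
            else:
                misleading.append((i, pos))    # drove an incorrect one
            tokens[i, pos] = mask_id           # hide it for later iterations
    return active, misleading

def regularized_loss(logits, attn, labels, active, misleading, gamma=0.1):
    """Conventional cross-entropy plus a regularization term that pushes
    attention toward extracted active words and away from misleading
    ones (one plausible form, not necessarily the paper's)."""
    loss = F.cross_entropy(logits, labels)
    reg = logits.new_zeros(())
    for i, pos in active:
        reg = reg - torch.log(attn[i, pos] + 1e-9)        # raise this weight
    for i, pos in misleading:
        reg = reg - torch.log(1.0 - attn[i, pos] + 1e-9)  # lower this weight
    return loss + gamma * reg
```

In a typical training setup, `extract_supervision` would run over the training set between epochs to accumulate supervision words, and `regularized_loss` would replace the plain cross-entropy objective in the subsequent epoch.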




Updated: 2021-02-23