Microblog sentiment analysis based on deep memory network with structural attention
Complex & Intelligent Systems (IF 5.0), Pub Date: 2022-11-18, DOI: 10.1007/s40747-022-00904-5
Lixin Zhou , Zhenyu Zhang , Laijun Zhao , Pingle Yang

Microblog sentiment analysis has important applications in many fields, such as social media analysis and online product reviews. However, traditional methods may struggle to capture long-range dependencies between words and can lose semantic information, owing to the low standardization of text and emojis in microblogs. In this paper, we propose a novel deep memory network with structural self-attention that stores long-term contextual information and extracts richer text and emoji information from microblogs, with the aim of improving sentiment-analysis performance. Specifically, the model first uses a bidirectional long short-term memory network to extract semantic information from the microblogs and treats the extraction results as the memory component of the deep memory network, which stores long-range dependencies without relying on syntactic parsers, sentiment lexicons, or feature engineering. Then, multi-step structural self-attention operations serve as the generalization and output components. Furthermore, the study adds a penalty term to the loss function to promote diversity across the model's attention hops. We conducted extensive experiments against eight baseline methods on real-world datasets. The results show that our model outperforms these state-of-the-art models, validating its effectiveness.
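The multi-hop structural self-attention and the diversity penalty described in the abstract can be sketched in NumPy. This is a minimal illustration of the standard structured self-attention formulation (attention A = softmax(W2·tanh(W1·Hᵀ)) over a BiLSTM memory H, with the Frobenius-norm penalty ‖AAᵀ − I‖²_F encouraging different hops to attend to different tokens); it is not the authors' released code, and all dimensions and variable names here are illustrative placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structural_self_attention(H, W1, W2):
    """Multi-hop structural self-attention over a memory matrix.
    H:  (n, d) memory, e.g. BiLSTM hidden states for n tokens.
    W1: (da, d), W2: (r, da) projection weights, r = number of hops.
    Returns the r-hop representation M (r, d) and attention A (r, n)."""
    A = softmax(W2 @ np.tanh(W1 @ H.T), axis=-1)  # each row sums to 1
    M = A @ H                                     # weighted sums of H rows
    return M, A

def attention_penalty(A):
    """Frobenius-norm penalty ||A A^T - I||_F^2 added to the loss to
    push the r attention hops toward distinct parts of the text."""
    r = A.shape[0]
    diff = A @ A.T - np.eye(r)
    return float((diff ** 2).sum())

# Toy example with random weights (a trained model would learn W1, W2).
rng = np.random.default_rng(0)
n, d, da, r = 6, 8, 4, 3           # tokens, hidden size, attn dim, hops
H = rng.normal(size=(n, d))        # stand-in for BiLSTM outputs
W1 = rng.normal(size=(da, d))
W2 = rng.normal(size=(r, da))
M, A = structural_self_attention(H, W1, W2)
p = attention_penalty(A)           # scaled by a coefficient in the loss
```

In training, `p` would be multiplied by a small coefficient and added to the classification loss, which is the mechanism the abstract refers to as the penalty promoting diversity across attention hops.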


