Multi-thread hierarchical deep model for context-aware sentiment analysis
Journal of Information Science (IF 2.4) Pub Date: 2021-02-15, DOI: 10.1177/0165551521990617
Abdalsamad Keramatfar, Hossein Amirkhani, Amir Jalaly Bidgoly

Real-time messaging and opinion sharing have made social media websites valuable sources of many kinds of information and open the door to many kinds of analysis. Sentiment analysis, one of the most important of these, has attracted increasing interest, yet research in this field still faces challenges. Mainstream sentiment analysis research on social media websites and microblogs exploits only the textual content of posts, which makes the analysis hard because microblog posts are short and noisy. Such posts, however, come with rich context that can be exploited for sentiment analysis. To use context as an auxiliary source, some recent papers model the context of the target post with its replies/retweets. We claim that multiple sequential contexts can be used jointly in a unified model. In this article, we propose a context-aware multi-thread hierarchical long short-term memory (MHLSTM) model that jointly models different kinds of contexts, such as tweep, hashtag and reply, besides the content of the target post. Experimental evaluations on a real-world Twitter data set demonstrate that our proposed model outperforms some strong baseline models by 28.39% in terms of relative error reduction.
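The core idea of the abstract, encoding each context "thread" (target post, replies, hashtag stream, tweep history) separately and then combining the thread summaries into one joint representation, can be sketched in plain Python. This is an illustrative sketch only, not the authors' MHLSTM: a trivial hashed bag-of-words encoder stands in for the per-thread LSTM, and a weighted sum stands in for the top-level network over thread summaries; all function names, thread labels and weights are hypothetical.

```python
# Illustrative sketch (not the authors' code) of multi-thread
# hierarchical context combination for sentiment analysis.
import hashlib

DIM = 8  # size of the pseudo-embedding / joint representation

def encode_token(token: str) -> list[float]:
    """Map a token to a deterministic pseudo-embedding via hashing."""
    digest = hashlib.md5(token.encode()).digest()
    return [b / 255.0 for b in digest[:DIM]]

def encode_thread(posts: list[str]) -> list[float]:
    """Stand-in for the per-thread LSTM: mean of token embeddings."""
    vecs = [encode_token(tok) for post in posts for tok in post.split()]
    if not vecs:
        return [0.0] * DIM
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def combine_threads(threads: dict[str, list[str]],
                    weights: dict[str, float]) -> list[float]:
    """Stand-in for the top-level model: weighted sum of thread summaries."""
    joint = [0.0] * DIM
    for name, posts in threads.items():
        summary = encode_thread(posts)
        w = weights.get(name, 1.0)
        joint = [j + w * s for j, s in zip(joint, summary)]
    return joint

# Hypothetical example: the target tweet plus three context threads.
threads = {
    "target": ["great phone battery life"],
    "reply": ["totally agree", "mine lasts two days"],
    "hashtag": ["#battery posts trending positive"],
    "tweep": ["this user's history is mostly upbeat"],
}
weights = {"target": 2.0, "reply": 1.0, "hashtag": 0.5, "tweep": 0.5}
rep = combine_threads(threads, weights)
print(len(rep))  # one fixed-size joint representation for the classifier
```

In the paper's actual model the per-thread encoder and the combination step are LSTMs trained end to end; the sketch only shows the hierarchical data flow, where each context is summarized independently before the summaries are fused.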




Updated: 2021-02-16