Does BERT Understand Sentiment? Leveraging Comparisons Between Contextual and Non-Contextual Embeddings to Improve Aspect-Based Sentiment Models
arXiv - CS - Computation and Language. Pub Date: 2020-11-23, DOI: arxiv-2011.11673
Natesh Reddy, Pranaydeep Singh, Muktabh Mayank Srivastava

When performing Polarity Detection for different words in a sentence, we need to look at the surrounding words to understand the sentiment. Massively pretrained language models like BERT can encode not just the words in a document but also the context around them. This raises the questions: "Does a pretrained language model also automatically encode sentiment information about each word?" and "Can it be used to infer polarity towards different aspects?". In this work we try to answer these questions by showing that a trained comparison between a contextual embedding from BERT and a generic, non-contextual word embedding can be used to infer sentiment. We also show that if we finetune a subset of weights in the model built on this comparison, it can achieve state-of-the-art results for Polarity Detection on Aspect-Based Sentiment Classification datasets.
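
The abstract does not specify the model architecture, so the following is only a minimal sketch of the comparison idea, not the authors' implementation. It assumes Hugging Face transformers and PyTorch; using BERT's own input (word-piece) embedding table as the "generic" non-contextual embedding, the concatenate-plus-difference feature, and the small classifier head are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel


class EmbeddingComparisonClassifier(nn.Module):
    """Compares contextual vs. non-contextual embeddings of an aspect token."""

    def __init__(self, num_polarities: int = 3, hidden: int = 768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Freeze BERT here; only the comparison head is trained in this sketch.
        # (The paper instead finetunes a subset of weights.)
        for p in self.bert.parameters():
            p.requires_grad = False
        # Input features: [contextual ; non-contextual ; difference] -> 3 * hidden.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_polarities),
        )

    def forward(self, input_ids, attention_mask, aspect_index):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        batch = torch.arange(input_ids.size(0))
        # Contextual vector of the aspect token from the last hidden layer.
        contextual = out.last_hidden_state[batch, aspect_index]
        # The same token's non-contextual embedding from the input table.
        aspect_ids = input_ids[batch, aspect_index]
        static = self.bert.embeddings.word_embeddings(aspect_ids)
        features = torch.cat([contextual, static, contextual - static], dim=-1)
        return self.head(features)


tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EmbeddingComparisonClassifier()

sentence = "The food was great but the service was slow."
enc = tokenizer(sentence, return_tensors="pt")
# Locate the aspect term's word-piece index by searching the token list
# (a simplification; real code would align the aspect span with word pieces).
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
aspect_index = torch.tensor([tokens.index("service")])

logits = model(enc["input_ids"], enc["attention_mask"], aspect_index)
print(logits)  # unnormalized scores over e.g. {negative, neutral, positive}
```

A usage note: running the same sentence with the aspect index pointing at "food" instead of "service" would, after training, be expected to yield a different polarity, which is the point of comparing the context-aware vector against the context-free one for each aspect separately.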

Updated: 2020-11-25