Fine-Tuning BERT for Sentiment Analysis of Vietnamese Reviews
arXiv - CS - Computation and Language. Pub Date: 2020-11-20, DOI: arxiv-2011.10426
Quoc Thai Nguyen, Thoai Linh Nguyen, Ngoc Hoang Luong, Quoc Hung Ngo

Sentiment analysis is an important task in the field of Natural Language Processing (NLP), in which users' feedback data on a specific issue are evaluated and analyzed. Many deep learning models have been proposed to tackle this task, including the recently introduced Bidirectional Encoder Representations from Transformers (BERT) model. In this paper, we experiment with two BERT fine-tuning methods for the sentiment analysis task on datasets of Vietnamese reviews: 1) a method that uses only the [CLS] token as the input for an attached feed-forward neural network, and 2) another method in which all BERT output vectors are used as the input for classification. Experimental results on two datasets show that models using BERT slightly outperform other models using GloVe and FastText. Also, regarding the datasets employed in this study, our proposed BERT fine-tuning method produces a model with better performance than the original BERT fine-tuning method.
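To make the two fine-tuning setups concrete, here is a minimal PyTorch sketch built on the Hugging Face transformers library. The checkpoint name (bert-base-multilingual-cased), the single linear classification layer, and the mean pooling used to combine all BERT output vectors are illustrative assumptions, not the exact configuration reported in the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

ENCODER = "bert-base-multilingual-cased"  # assumed checkpoint; the paper may use another

class ClsTokenClassifier(nn.Module):
    """Method 1: feed only the [CLS] output vector to a feed-forward classifier."""
    def __init__(self, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(ENCODER)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_vec = out.last_hidden_state[:, 0]  # [CLS] sits at position 0
        return self.classifier(cls_vec)

class AllTokensClassifier(nn.Module):
    """Method 2: use all BERT output vectors; here they are mean-pooled
    over non-padding tokens (one plausible way to combine them)."""
    def __init__(self, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(ENCODER)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        mask = attention_mask.unsqueeze(-1).float()
        # average the hidden states of real tokens, ignoring padding
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return self.classifier(pooled)
```

Both variants are trained end-to-end with a standard cross-entropy loss; they differ only in which encoder outputs reach the classification head.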

Updated: 2020-11-23