Label-Embedding Bi-directional Attentive Model for Multi-label Text Classification
Neural Processing Letters (IF 2.6). Pub Date: 2021-01-01. DOI: 10.1007/s11063-020-10411-8
Naiyin Liu , Qianlong Wang , Jiangtao Ren

Multi-label text classification is a critical task in the field of natural language processing. As the latest language representation model, BERT achieves new state-of-the-art results on classification tasks. Nevertheless, BERT's text classification framework fails to make full use of token-level text representations and label embeddings, since it uses only the final hidden state of the [CLS] token as the sequence-level text representation for classification. We assume that finer-grained token-level text representations and label embeddings contribute to classification. Consequently, in this paper we propose a Label-Embedding Bi-directional Attentive model to improve the performance of BERT's text classification framework. In particular, we extend BERT's text classification framework with label embedding and bi-directional attention. Experimental results on five datasets show that our model achieves notable improvements over both baseline and state-of-the-art models.
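The abstract gives no implementation, but the architecture it describes (token-level BERT states, trainable label embeddings, attention in both the token-to-label and label-to-token directions, and independent per-label outputs) can be sketched. Below is a minimal PyTorch sketch of one plausible reading; the exact attention form, fusion, pooling, and classifier head are illustrative assumptions, not the authors' model.

```python
# A minimal sketch (not the authors' code) of a label-embedding
# bi-directional attention head on top of BERT's token-level outputs.
# Only the overall idea comes from the abstract; the attention form,
# fusion, pooling, and classifier below are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel


class LabelEmbeddingBiAttentive(nn.Module):
    def __init__(self, num_labels: int, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # One trainable embedding per label (random init is an assumption).
        self.label_emb = nn.Embedding(num_labels, hidden)
        self.fuse = nn.Linear(2 * hidden, hidden)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, input_ids, attention_mask):
        # Token-level states, not just the [CLS] vector: (B, T, H).
        tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        labels = self.label_emb.weight                        # (L, H)
        # Token-label similarity scores: (B, T, L).
        scores = torch.einsum("bth,lh->btl", tokens, labels)
        masked = scores.masked_fill(attention_mask.unsqueeze(-1) == 0, -1e4)

        # Direction 1, token -> label: each token attends over the labels,
        # yielding a label-aware representation of every token.
        t2l = torch.softmax(scores, dim=-1)                   # (B, T, L)
        label_aware = torch.einsum("btl,lh->bth", t2l, labels)

        # Direction 2, label -> token: each label attends over the tokens,
        # yielding a text representation specific to that label.
        l2t = torch.softmax(masked, dim=1)                    # (B, T, L)
        label_ctx = torch.einsum("btl,bth->blh", l2t, tokens)

        # Pool the fused token states into one document vector (masked mean).
        fused = torch.tanh(self.fuse(torch.cat([tokens, label_aware], dim=-1)))
        m = attention_mask.unsqueeze(-1).float()
        doc = (fused * m).sum(dim=1) / m.sum(dim=1).clamp(min=1.0)  # (B, H)

        # Score each label from its own context plus the document vector.
        doc_rep = doc.unsqueeze(1).expand_as(label_ctx)       # (B, L, H)
        logits = self.out(torch.cat([label_ctx, doc_rep], dim=-1)).squeeze(-1)
        return logits  # train with nn.BCEWithLogitsLoss for multi-label
```

Per-label logits trained with a sigmoid loss such as nn.BCEWithLogitsLoss keep the label decisions independent, which is what distinguishes multi-label classification from a single-label softmax head.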




Updated: 2021-01-01