Joint Pre-Trained Chinese Named Entity Recognition Based on Bi-Directional Language Model
International Journal of Pattern Recognition and Artificial Intelligence (IF 1.5), Pub Date: 2021-04-05, DOI: 10.1142/s0218001421530037
Changxia Ma, Chen Zhang

Current named entity recognition (NER) models are mainly based on convolutional or recurrent neural networks. To achieve high performance, these networks require large amounts of training data in the form of feature-engineered corpora and lexicons. Chinese NER is particularly challenging because of the high contextual relevance of Chinese characters: a character or phrase may carry many possible meanings depending on its context. To this end, we propose a model for Chinese NER that combines a pre-trained Bidirectional Encoder Representations from Transformers (BERT) language model with a joint bi-directional long short-term memory (Bi-LSTM) and conditional random field (CRF) model. The bottom network layer embeds Chinese characters and outputs character-level representations. These representations are then fed into a Bi-LSTM to capture contextual sequence information. The top layer of the proposed model is a CRF, which accounts for the dependencies between adjacent tags and jointly decodes the optimal chain of tags. We conducted a series of extensive experiments to study the improvements the proposed neural network architecture yields on different datasets without relying heavily on handcrafted features or domain-specific knowledge. Experimental results show that the proposed model is effective and that character-level representation is of great significance for Chinese NER tasks. In addition, through this work we have constructed a new informal conversational message corpus, the autonomous bus information inquiry dataset, on which our method improves significantly over strong baselines.
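The CRF top layer described above scores transitions between adjacent tags and jointly decodes the optimal tag chain with the Viterbi algorithm. The following is a minimal, self-contained sketch of that decoding step; the tag set, emission scores, and transition scores are hypothetical toy values, not the paper's actual parameters (in the full model, emission scores would come from the BERT + Bi-LSTM layers).

```python
def viterbi_decode(emissions, transitions, num_tags):
    """Return the highest-scoring tag sequence for one sentence.

    emissions  : per-character tag scores, shape [seq_len][num_tags]
    transitions: transitions[i][j] = score of moving from tag i to tag j
    """
    # score[t] = best score of any path ending in tag t at the current step
    score = list(emissions[0])
    history = []  # back-pointers for each subsequent step

    for emit in emissions[1:]:
        prev_best, new_score = [], []
        for j in range(num_tags):
            # pick the previous tag that maximizes the path score into tag j
            best_i = max(range(num_tags),
                         key=lambda i: score[i] + transitions[i][j])
            prev_best.append(best_i)
            new_score.append(score[best_i] + transitions[best_i][j] + emit[j])
        score = new_score
        history.append(prev_best)

    # backtrack from the best final tag to recover the optimal chain
    best_last = max(range(num_tags), key=lambda t: score[t])
    path = [best_last]
    for prev_best in reversed(history):
        path.append(prev_best[path[-1]])
    path.reverse()
    return path


# Toy example with 3 tags: 0 = B-LOC, 1 = I-LOC, 2 = O
TAGS = ["B-LOC", "I-LOC", "O"]
emissions = [
    [2.0, 0.1, 0.5],   # character 1 strongly looks like B-LOC
    [0.6, 1.2, 1.1],   # character 2 is ambiguous on its own
    [0.2, 0.3, 1.8],   # character 3 looks like O
]
transitions = [
    [-1.0,  1.5,  0.0],  # B-LOC -> I-LOC is encouraged
    [-1.0,  0.5,  0.2],
    [ 0.5, -2.0,  0.3],  # O -> I-LOC is penalized (invalid chunk)
]
best = viterbi_decode(emissions, transitions, 3)
print([TAGS[t] for t in best])  # → ['B-LOC', 'I-LOC', 'O']
```

Note how the transition scores resolve the ambiguous second character: its emission alone slightly favors I-LOC, but it is the high B-LOC → I-LOC transition score that makes the joint chain B-LOC, I-LOC, O optimal — exactly the kind of adjacent-tag dependency a per-character softmax classifier cannot model.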

Updated: 2021-04-05