Graph-Based Reasoning Model for Multiple Relation Extraction
Neurocomputing (IF 5.5) Pub Date: 2021-01-01, DOI: 10.1016/j.neucom.2020.09.025
Heyan Huang, Ming Lei, Chong Feng

Abstract Linguistic knowledge is useful for various NLP tasks, but the difficulty lies in its representation and application. We consider that linguistic knowledge is implicit in a large-scale corpus, while classification knowledge, i.e., the knowledge related to the definitions of entity and relation types, is implicit in the labeled training data. Therefore, a corpus subgraph is proposed to mine more linguistic knowledge from easily accessible unlabeled data, and sentence subgraphs are used to acquire classification knowledge. In this paper, they jointly constitute a relation knowledge graph (RKG) used to extract relations from sentences. On the RKG, entity recognition can be regarded as a property-value filling problem and relation classification can be regarded as a link prediction problem. Thus, multiple relation extraction can be treated as a reasoning process for knowledge completion. We combine statistical reasoning and neural network reasoning to segment sentences into entity chunks and non-entity chunks, then propose a novel Chunk Graph LSTM network to learn the representations of entity chunks and infer the relations among them. Experiments on two standard datasets demonstrate that our model outperforms previous models for multiple relation extraction.
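To make the described pipeline concrete, the sketch below illustrates the general idea of encoding a sentence, pooling chunk representations, and scoring relations between entity chunks as candidate links. It is a minimal, hypothetical example in PyTorch, not the authors' Chunk Graph LSTM: the module name `ChunkGraphRelationScorer`, the mean-pooling of chunk spans, the bilinear link scorer, and all dimensions are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): an LSTM encodes tokens,
# entity-chunk vectors are pooled from its outputs, and each ordered pair of chunks is
# scored as a candidate relation link.
import torch
import torch.nn as nn


class ChunkGraphRelationScorer(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_relations=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM over the token sequence; chunk vectors are pooled from its outputs.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Bilinear scorer: treats relation classification between two chunk vectors
        # as link prediction over the candidate pair.
        self.link_scorer = nn.Bilinear(2 * hidden_dim, 2 * hidden_dim, num_relations)

    def forward(self, token_ids, chunk_spans):
        # token_ids: (1, seq_len); chunk_spans: list of (start, end) index pairs marking
        # the entity chunks produced by a separate segmentation step.
        outputs, _ = self.lstm(self.embed(token_ids))          # (1, seq_len, 2*hidden_dim)
        chunk_vecs = [outputs[0, s:e].mean(dim=0) for s, e in chunk_spans]
        # Score every ordered pair of entity chunks (candidate links).
        scores = {}
        for i, h_i in enumerate(chunk_vecs):
            for j, h_j in enumerate(chunk_vecs):
                if i != j:
                    scores[(i, j)] = self.link_scorer(
                        h_i.unsqueeze(0), h_j.unsqueeze(0)
                    ).squeeze(0)                               # (num_relations,)
        return scores


if __name__ == "__main__":
    model = ChunkGraphRelationScorer(vocab_size=1000)
    tokens = torch.randint(0, 1000, (1, 12))                   # toy sentence of 12 token ids
    spans = [(0, 2), (5, 7), (9, 11)]                          # three hypothetical entity chunks
    pair_scores = model(tokens, spans)
    print({pair: s.argmax().item() for pair, s in pair_scores.items()})
```

In this toy setup the predicted relation for each chunk pair is simply the argmax over the scorer's output; the paper's actual model additionally exploits the corpus subgraph and statistical reasoning, which this sketch does not attempt to reproduce.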

Updated: 2021-01-01